Mar 6 01:46:55.302799 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026
Mar 6 01:46:55.302831 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:46:55.302901 kernel: BIOS-provided physical RAM map:
Mar 6 01:46:55.302911 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 01:46:55.302920 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 01:46:55.302929 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 01:46:55.302940 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 01:46:55.302950 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 01:46:55.302960 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 01:46:55.302974 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 01:46:55.302983 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 01:46:55.302993 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 01:46:55.303033 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 01:46:55.303046 kernel: NX (Execute Disable) protection: active
Mar 6 01:46:55.303059 kernel: APIC: Static calls initialized
Mar 6 01:46:55.303105 kernel: SMBIOS 2.8 present.
Mar 6 01:46:55.303119 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 01:46:55.303220 kernel: Hypervisor detected: KVM
Mar 6 01:46:55.303233 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 01:46:55.303244 kernel: kvm-clock: using sched offset of 8688366581 cycles
Mar 6 01:46:55.303255 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 01:46:55.303265 kernel: tsc: Detected 2445.424 MHz processor
Mar 6 01:46:55.303276 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 01:46:55.303286 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 01:46:55.303305 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 01:46:55.303315 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 01:46:55.303325 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 01:46:55.303335 kernel: Using GB pages for direct mapping
Mar 6 01:46:55.303346 kernel: ACPI: Early table checksum verification disabled
Mar 6 01:46:55.303356 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 01:46:55.303367 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303377 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303388 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303403 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 01:46:55.303414 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303424 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303435 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303446 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:46:55.303456 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 01:46:55.303467 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 01:46:55.303484 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 01:46:55.303499 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 01:46:55.303510 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 01:46:55.303522 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 01:46:55.303533 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 01:46:55.303544 kernel: No NUMA configuration found
Mar 6 01:46:55.303556 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 01:46:55.303568 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 6 01:46:55.303586 kernel: Zone ranges:
Mar 6 01:46:55.303597 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 01:46:55.303609 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 01:46:55.303621 kernel: Normal empty
Mar 6 01:46:55.303632 kernel: Movable zone start for each node
Mar 6 01:46:55.303643 kernel: Early memory node ranges
Mar 6 01:46:55.303654 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 01:46:55.303665 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 01:46:55.303676 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 01:46:55.303692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:46:55.303734 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 01:46:55.303748 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 01:46:55.303759 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 01:46:55.303770 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 01:46:55.303780 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 01:46:55.303792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 01:46:55.303802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 01:46:55.303813 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 01:46:55.303830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 01:46:55.303879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 01:46:55.303891 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 01:46:55.303902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 01:46:55.303914 kernel: TSC deadline timer available
Mar 6 01:46:55.303924 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 6 01:46:55.303935 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 01:46:55.303947 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 01:46:55.303990 kernel: kvm-guest: setup PV sched yield
Mar 6 01:46:55.304009 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 01:46:55.304020 kernel: Booting paravirtualized kernel on KVM
Mar 6 01:46:55.304031 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 01:46:55.304043 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 01:46:55.304055 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 6 01:46:55.304068 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 6 01:46:55.304080 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 01:46:55.304092 kernel: kvm-guest: PV spinlocks enabled
Mar 6 01:46:55.304103 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 01:46:55.304122 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:46:55.304283 kernel: random: crng init done
Mar 6 01:46:55.304295 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 01:46:55.304306 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 01:46:55.304317 kernel: Fallback order for Node 0: 0
Mar 6 01:46:55.304328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 6 01:46:55.304339 kernel: Policy zone: DMA32
Mar 6 01:46:55.304350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 01:46:55.304369 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 6 01:46:55.304381 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 01:46:55.304391 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 6 01:46:55.304402 kernel: ftrace: allocated 149 pages with 4 groups
Mar 6 01:46:55.304413 kernel: Dynamic Preempt: voluntary
Mar 6 01:46:55.304424 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 01:46:55.304442 kernel: rcu: RCU event tracing is enabled.
Mar 6 01:46:55.304454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 01:46:55.304465 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 01:46:55.304481 kernel: Rude variant of Tasks RCU enabled.
Mar 6 01:46:55.304492 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 01:46:55.304503 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 01:46:55.304514 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 01:46:55.304559 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 01:46:55.304572 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 01:46:55.304583 kernel: Console: colour VGA+ 80x25
Mar 6 01:46:55.304594 kernel: printk: console [ttyS0] enabled
Mar 6 01:46:55.304605 kernel: ACPI: Core revision 20230628
Mar 6 01:46:55.304624 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 01:46:55.304636 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 01:46:55.304647 kernel: x2apic enabled
Mar 6 01:46:55.304658 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 01:46:55.304669 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 01:46:55.304680 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 01:46:55.304691 kernel: kvm-guest: setup PV IPIs
Mar 6 01:46:55.304703 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 01:46:55.304733 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 6 01:46:55.304745 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 6 01:46:55.304757 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 01:46:55.304768 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 01:46:55.304784 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 01:46:55.304796 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 01:46:55.304808 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 01:46:55.304819 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 01:46:55.304831 kernel: Speculative Store Bypass: Vulnerable
Mar 6 01:46:55.304894 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 01:46:55.304934 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 01:46:55.304947 kernel: active return thunk: srso_alias_return_thunk
Mar 6 01:46:55.304959 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 01:46:55.304971 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 01:46:55.304983 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 01:46:55.304994 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 01:46:55.305006 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 01:46:55.305024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 01:46:55.305035 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 01:46:55.305047 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 01:46:55.305060 kernel: Freeing SMP alternatives memory: 32K
Mar 6 01:46:55.305074 kernel: pid_max: default: 32768 minimum: 301
Mar 6 01:46:55.305087 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 6 01:46:55.305100 kernel: landlock: Up and running.
Mar 6 01:46:55.305111 kernel: SELinux: Initializing.
Mar 6 01:46:55.305178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:46:55.305203 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:46:55.305216 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 01:46:55.305228 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:46:55.305241 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:46:55.305254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:46:55.305268 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 01:46:55.305280 kernel: signal: max sigframe size: 1776
Mar 6 01:46:55.305318 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 01:46:55.305333 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 01:46:55.305351 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 01:46:55.305362 kernel: smp: Bringing up secondary CPUs ...
Mar 6 01:46:55.305374 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 01:46:55.305386 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 01:46:55.305398 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 01:46:55.305411 kernel: smpboot: Max logical packages: 1
Mar 6 01:46:55.305423 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 6 01:46:55.305434 kernel: devtmpfs: initialized
Mar 6 01:46:55.305445 kernel: x86/mm: Memory block size: 128MB
Mar 6 01:46:55.305463 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 01:46:55.305477 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 01:46:55.305489 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 01:46:55.305501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 01:46:55.305512 kernel: audit: initializing netlink subsys (disabled)
Mar 6 01:46:55.305524 kernel: audit: type=2000 audit(1772761612.315:1): state=initialized audit_enabled=0 res=1
Mar 6 01:46:55.305535 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 01:46:55.305547 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 01:46:55.305590 kernel: cpuidle: using governor menu
Mar 6 01:46:55.305609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 01:46:55.305620 kernel: dca service started, version 1.12.1
Mar 6 01:46:55.305632 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 6 01:46:55.305644 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 01:46:55.305656 kernel: PCI: Using configuration type 1 for base access
Mar 6 01:46:55.305668 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 01:46:55.305679 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 01:46:55.305691 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 01:46:55.305702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 01:46:55.305720 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 01:46:55.305731 kernel: ACPI: Added _OSI(Module Device)
Mar 6 01:46:55.305742 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 01:46:55.305754 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 01:46:55.305766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 01:46:55.305778 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 6 01:46:55.305789 kernel: ACPI: Interpreter enabled
Mar 6 01:46:55.305801 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 01:46:55.305812 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 01:46:55.305829 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 01:46:55.305887 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 01:46:55.305900 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 01:46:55.305911 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 01:46:55.306444 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 01:46:55.306680 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 01:46:55.306946 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 01:46:55.306972 kernel: PCI host bridge to bus 0000:00
Mar 6 01:46:55.307638 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 01:46:55.307788 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 01:46:55.307975 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 01:46:55.308236 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 01:46:55.308383 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 01:46:55.308586 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 01:46:55.308762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 01:46:55.309043 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 6 01:46:55.309362 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 6 01:46:55.309519 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 6 01:46:55.309664 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 6 01:46:55.309810 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 6 01:46:55.309995 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 01:46:55.310334 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 6 01:46:55.310650 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 6 01:46:55.310889 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 6 01:46:55.311044 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 01:46:55.311382 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 6 01:46:55.311543 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 6 01:46:55.311704 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 6 01:46:55.311927 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 01:46:55.312226 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 6 01:46:55.312410 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 6 01:46:55.312581 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 6 01:46:55.312729 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 01:46:55.312997 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 6 01:46:55.313417 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 6 01:46:55.313644 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 01:46:55.313931 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 6 01:46:55.314113 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 6 01:46:55.314342 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 6 01:46:55.314574 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 6 01:46:55.314729 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 6 01:46:55.314745 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 01:46:55.314753 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 01:46:55.314760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 01:46:55.314767 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 01:46:55.314773 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 01:46:55.314780 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 01:46:55.314787 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 01:46:55.314794 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 01:46:55.314801 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 01:46:55.314811 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 01:46:55.314817 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 01:46:55.314824 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 01:46:55.314831 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 01:46:55.314869 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 01:46:55.314877 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 01:46:55.314884 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 01:46:55.314890 kernel: iommu: Default domain type: Translated
Mar 6 01:46:55.314897 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 01:46:55.314908 kernel: PCI: Using ACPI for IRQ routing
Mar 6 01:46:55.314915 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 01:46:55.314922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 01:46:55.314929 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 01:46:55.315105 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 01:46:55.315519 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 01:46:55.315821 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 01:46:55.315832 kernel: vgaarb: loaded
Mar 6 01:46:55.315875 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 01:46:55.315883 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 01:46:55.315890 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 01:46:55.315896 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 01:46:55.315904 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 01:46:55.315910 kernel: pnp: PnP ACPI init
Mar 6 01:46:55.316419 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 01:46:55.316435 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 01:46:55.316467 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 01:46:55.316475 kernel: NET: Registered PF_INET protocol family
Mar 6 01:46:55.316482 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 01:46:55.316489 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 01:46:55.316496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 01:46:55.316503 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 01:46:55.316510 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 01:46:55.316517 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 01:46:55.316524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:46:55.316534 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:46:55.316540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 01:46:55.316547 kernel: NET: Registered PF_XDP protocol family
Mar 6 01:46:55.316694 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 01:46:55.316831 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 01:46:55.317028 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 01:46:55.317261 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 01:46:55.317486 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 01:46:55.317637 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 01:46:55.317647 kernel: PCI: CLS 0 bytes, default 64
Mar 6 01:46:55.317654 kernel: Initialise system trusted keyrings
Mar 6 01:46:55.317661 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 01:46:55.317668 kernel: Key type asymmetric registered
Mar 6 01:46:55.317675 kernel: Asymmetric key parser 'x509' registered
Mar 6 01:46:55.317682 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 6 01:46:55.317688 kernel: io scheduler mq-deadline registered
Mar 6 01:46:55.317696 kernel: io scheduler kyber registered
Mar 6 01:46:55.317706 kernel: io scheduler bfq registered
Mar 6 01:46:55.317713 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 01:46:55.317721 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 01:46:55.317728 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 01:46:55.317735 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 01:46:55.317742 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 01:46:55.317748 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 01:46:55.317755 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 01:46:55.317762 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 01:46:55.317772 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 01:46:55.318075 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 01:46:55.318092 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 01:46:55.318384 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 01:46:55.318608 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:46:54 UTC (1772761614)
Mar 6 01:46:55.318826 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 01:46:55.318881 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 01:46:55.318894 kernel: NET: Registered PF_INET6 protocol family
Mar 6 01:46:55.318934 kernel: Segment Routing with IPv6
Mar 6 01:46:55.318942 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 01:46:55.318949 kernel: NET: Registered PF_PACKET protocol family
Mar 6 01:46:55.318956 kernel: Key type dns_resolver registered
Mar 6 01:46:55.318963 kernel: IPI shorthand broadcast: enabled
Mar 6 01:46:55.318986 kernel: sched_clock: Marking stable (2284150755, 498565549)->(3286728905, -504012601)
Mar 6 01:46:55.318994 kernel: registered taskstats version 1
Mar 6 01:46:55.319001 kernel: Loading compiled-in X.509 certificates
Mar 6 01:46:55.319008 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca'
Mar 6 01:46:55.319019 kernel: Key type .fscrypt registered
Mar 6 01:46:55.319026 kernel: Key type fscrypt-provisioning registered
Mar 6 01:46:55.319107 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 01:46:55.319115 kernel: ima: Allocated hash algorithm: sha1
Mar 6 01:46:55.319122 kernel: ima: No architecture policies found
Mar 6 01:46:55.319165 kernel: clk: Disabling unused clocks
Mar 6 01:46:55.319172 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 6 01:46:55.319179 kernel: Write protecting the kernel read-only data: 36864k
Mar 6 01:46:55.319186 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 6 01:46:55.319197 kernel: Run /init as init process
Mar 6 01:46:55.319204 kernel: with arguments:
Mar 6 01:46:55.319211 kernel: /init
Mar 6 01:46:55.319218 kernel: with environment:
Mar 6 01:46:55.319225 kernel: HOME=/
Mar 6 01:46:55.319232 kernel: TERM=linux
Mar 6 01:46:55.319241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:46:55.319250 systemd[1]: Detected virtualization kvm.
Mar 6 01:46:55.319261 systemd[1]: Detected architecture x86-64.
Mar 6 01:46:55.319268 systemd[1]: Running in initrd.
Mar 6 01:46:55.319275 systemd[1]: No hostname configured, using default hostname.
Mar 6 01:46:55.319282 systemd[1]: Hostname set to .
Mar 6 01:46:55.319290 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:46:55.319297 systemd[1]: Queued start job for default target initrd.target.
Mar 6 01:46:55.319305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:46:55.319312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:46:55.319325 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 01:46:55.319367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:46:55.319379 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:46:55.319386 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:46:55.319395 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:46:55.319403 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:46:55.319410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:46:55.319422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:46:55.319429 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:46:55.319436 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:46:55.319444 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:46:55.319466 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:46:55.319476 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:46:55.319486 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:46:55.319494 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:46:55.319501 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:46:55.319509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:46:55.319517 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:46:55.319524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:46:55.319531 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:46:55.319539 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:46:55.319547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:46:55.319557 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:46:55.319564 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:46:55.319572 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:46:55.319579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:46:55.319587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:46:55.319620 systemd-journald[194]: Collecting audit messages is disabled.
Mar 6 01:46:55.319642 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:46:55.319650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:46:55.319657 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:46:55.319669 systemd-journald[194]: Journal started
Mar 6 01:46:55.319684 systemd-journald[194]: Runtime Journal (/run/log/journal/bb061870189e42c6bd956a428f59b425) is 6.0M, max 48.4M, 42.3M free.
Mar 6 01:46:55.316358 systemd-modules-load[196]: Inserted module 'overlay'
Mar 6 01:46:55.329656 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:46:55.339515 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:46:55.603924 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:46:55.603973 kernel: Bridge firewalling registered
Mar 6 01:46:55.603987 kernel: hrtimer: interrupt took 8756511 ns
Mar 6 01:46:55.350937 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 6 01:46:55.619579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:46:55.627060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:46:55.634061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:46:55.642480 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:46:55.652996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:46:55.675513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:46:55.676888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:46:55.678533 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:46:55.702789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:46:55.711911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:46:55.720384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:46:55.723625 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:46:55.730990 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:46:55.752230 dracut-cmdline[232]: dracut-dracut-053
Mar 6 01:46:55.755537 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:46:55.773079 systemd-resolved[228]: Positive Trust Anchors:
Mar 6 01:46:55.773096 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:46:55.773229 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:46:55.776204 systemd-resolved[228]: Defaulting to hostname 'linux'.
Mar 6 01:46:55.778469 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:46:55.785948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:46:55.855253 kernel: SCSI subsystem initialized
Mar 6 01:46:55.866208 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:46:55.879229 kernel: iscsi: registered transport (tcp)
Mar 6 01:46:55.902254 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:46:55.902337 kernel: QLogic iSCSI HBA Driver
Mar 6 01:46:55.972422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:46:55.987329 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:46:56.020232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:46:56.020268 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:46:56.026888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:46:56.078213 kernel: raid6: avx2x4 gen() 28127 MB/s
Mar 6 01:46:56.096206 kernel: raid6: avx2x2 gen() 27767 MB/s
Mar 6 01:46:56.115714 kernel: raid6: avx2x1 gen() 21004 MB/s
Mar 6 01:46:56.115763 kernel: raid6: using algorithm avx2x4 gen() 28127 MB/s
Mar 6 01:46:56.135303 kernel: raid6: .... xor() 4062 MB/s, rmw enabled
Mar 6 01:46:56.135374 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:46:56.177471 kernel: xor: automatically using best checksumming function avx
Mar 6 01:46:56.355413 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:46:56.373100 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:46:56.388377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:46:56.402973 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 6 01:46:56.408473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:46:56.422463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:46:56.438372 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Mar 6 01:46:56.477664 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:46:56.497428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:46:56.605729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:46:56.627381 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:46:56.654320 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:46:56.671499 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:46:56.687316 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:46:56.673397 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:46:56.694884 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:46:56.716450 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:46:56.721385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:46:56.743682 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:46:56.744362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:46:56.748330 kernel: GPT:9289727 != 19775487
Mar 6 01:46:56.748434 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:46:56.748450 kernel: GPT:9289727 != 19775487
Mar 6 01:46:56.748479 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:46:56.748495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:46:56.788214 kernel: libata version 3.00 loaded.
Mar 6 01:46:56.795083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:46:56.795309 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:46:56.805243 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:46:56.963743 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:46:56.963782 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (462)
Mar 6 01:46:56.963799 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:46:56.963814 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:46:56.964359 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:46:56.935541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:46:56.977759 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Mar 6 01:46:56.977787 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:46:56.982892 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:46:56.936041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:46:56.955631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:46:56.995657 kernel: scsi host0: ahci
Mar 6 01:46:56.977345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:46:56.984364 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:46:57.000370 kernel: scsi host1: ahci
Mar 6 01:46:57.003223 kernel: scsi host2: ahci
Mar 6 01:46:57.005964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:46:57.009176 kernel: scsi host3: ahci
Mar 6 01:46:57.011184 kernel: scsi host4: ahci
Mar 6 01:46:57.011189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:46:57.011382 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:46:57.021666 kernel: scsi host5: ahci
Mar 6 01:46:57.022032 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 6 01:46:57.022057 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 6 01:46:57.022076 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 6 01:46:57.022096 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 6 01:46:57.022114 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 6 01:46:57.022185 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 6 01:46:57.024663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:46:57.196687 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:46:57.214373 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:46:57.214787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:46:57.230033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:46:57.262728 disk-uuid[556]: Primary Header is updated.
Mar 6 01:46:57.262728 disk-uuid[556]: Secondary Entries is updated.
Mar 6 01:46:57.262728 disk-uuid[556]: Secondary Header is updated.
Mar 6 01:46:57.271284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:46:57.271502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:46:57.281597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:46:57.334675 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:46:57.334738 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:46:57.338205 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:46:57.338237 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:46:57.348356 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:46:57.354493 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:46:57.354536 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:46:57.354558 kernel: ata3.00: applying bridge limits
Mar 6 01:46:57.358212 kernel: ata3.00: configured for UDMA/100
Mar 6 01:46:57.367210 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:46:57.434542 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:46:57.435068 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:46:57.448761 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:46:58.283236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:46:58.284430 disk-uuid[562]: The operation has completed successfully.
Mar 6 01:46:58.332486 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:46:58.332661 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:46:58.356563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:46:58.361953 sh[593]: Success
Mar 6 01:46:58.381231 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:46:58.435409 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:46:58.459951 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:46:58.465266 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:46:58.486027 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:46:58.486079 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:46:58.486092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:46:58.493468 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:46:58.493501 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:46:58.506049 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:46:58.509715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:46:58.525348 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:46:58.530455 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:46:58.589834 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:46:58.590035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:46:58.590049 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:46:58.599211 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:46:58.616661 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:46:58.623323 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:46:58.632888 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:46:58.649478 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:46:59.067542 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:46:59.080407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:46:59.109034 systemd-networkd[779]: lo: Link UP
Mar 6 01:46:59.109064 systemd-networkd[779]: lo: Gained carrier
Mar 6 01:46:59.111347 systemd-networkd[779]: Enumeration completed
Mar 6 01:46:59.111481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:46:59.114812 ignition[671]: Ignition 2.19.0
Mar 6 01:46:59.112237 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:46:59.114834 ignition[671]: Stage: fetch-offline
Mar 6 01:46:59.112242 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:46:59.114975 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:46:59.114885 systemd-networkd[779]: eth0: Link UP
Mar 6 01:46:59.114989 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:46:59.114890 systemd-networkd[779]: eth0: Gained carrier
Mar 6 01:46:59.115369 ignition[671]: parsed url from cmdline: ""
Mar 6 01:46:59.114900 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:46:59.115374 ignition[671]: no config URL provided
Mar 6 01:46:59.125562 systemd[1]: Reached target network.target - Network.
Mar 6 01:46:59.115381 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:46:59.135283 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:46:59.115392 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:46:59.115458 ignition[671]: op(1): [started] loading QEMU firmware config module
Mar 6 01:46:59.115464 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:46:59.133604 ignition[671]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:46:59.133630 ignition[671]: QEMU firmware config was not found. Ignoring...
Mar 6 01:46:59.382048 ignition[671]: parsing config with SHA512: b53d529e5a18378b84987a1fe26450382a02959c753746beb9feae996fda9b9ce7b591cf5e2ba7ccb997947c77a83a98a198bf396cbb24f9a6765a023d53340a
Mar 6 01:46:59.388718 unknown[671]: fetched base config from "system"
Mar 6 01:46:59.388750 unknown[671]: fetched user config from "qemu"
Mar 6 01:46:59.392635 ignition[671]: fetch-offline: fetch-offline passed
Mar 6 01:46:59.390104 systemd-resolved[228]: Detected conflict on linux IN A 10.0.0.156
Mar 6 01:46:59.392710 ignition[671]: Ignition finished successfully
Mar 6 01:46:59.390114 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Mar 6 01:46:59.395504 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:46:59.401230 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:46:59.419437 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:46:59.452506 ignition[785]: Ignition 2.19.0
Mar 6 01:46:59.459297 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:46:59.452514 ignition[785]: Stage: kargs
Mar 6 01:46:59.452684 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:46:59.452698 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:46:59.453512 ignition[785]: kargs: kargs passed
Mar 6 01:46:59.453568 ignition[785]: Ignition finished successfully
Mar 6 01:46:59.483439 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:46:59.508193 ignition[793]: Ignition 2.19.0
Mar 6 01:46:59.508243 ignition[793]: Stage: disks
Mar 6 01:46:59.508513 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:46:59.511254 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:46:59.508527 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:46:59.515265 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:46:59.509518 ignition[793]: disks: disks passed
Mar 6 01:46:59.519942 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:46:59.509566 ignition[793]: Ignition finished successfully
Mar 6 01:46:59.523237 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:46:59.528284 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:46:59.530977 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:46:59.543400 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:46:59.570267 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:46:59.575638 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:46:59.598281 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:46:59.731215 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:46:59.733068 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:46:59.736081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:46:59.764341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:46:59.771600 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:46:59.795079 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 6 01:46:59.795105 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:46:59.795116 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:46:59.801651 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:46:59.801699 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:46:59.776949 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:46:59.776998 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:46:59.777024 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:46:59.795075 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:46:59.807931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:46:59.810934 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:47:00.154504 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:47:00.163258 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:47:00.174970 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:47:00.182710 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:47:00.266476 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 6 01:47:00.408507 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:47:00.423305 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:47:00.428523 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:47:00.442604 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:47:00.459529 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:47:00.479510 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:47:00.955019 ignition[924]: INFO : Ignition 2.19.0
Mar 6 01:47:00.955019 ignition[924]: INFO : Stage: mount
Mar 6 01:47:00.959558 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:47:00.959558 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:47:00.959558 ignition[924]: INFO : mount: mount passed
Mar 6 01:47:00.959558 ignition[924]: INFO : Ignition finished successfully
Mar 6 01:47:00.972989 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:47:00.993455 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:47:01.004883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:47:01.028205 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 6 01:47:01.033901 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:47:01.033957 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:47:01.033979 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:47:01.051235 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:47:01.054300 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:47:01.092177 ignition[957]: INFO : Ignition 2.19.0
Mar 6 01:47:01.092177 ignition[957]: INFO : Stage: files
Mar 6 01:47:01.098587 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:47:01.098587 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:47:01.098587 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:47:01.098587 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:47:01.098587 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:47:01.118468 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:47:01.118468 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:47:01.118468 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:47:01.118468 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:47:01.118468 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:47:01.102804 unknown[957]: wrote ssh authorized keys file for user: core
Mar 6 01:47:01.193967 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 01:47:01.344047 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:47:01.344047 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:47:01.360778 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 6 01:47:01.842635 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 6 01:47:04.316655 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 6 01:47:04.316655 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 6 01:47:04.333979 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:47:04.380301 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:47:04.387757 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 01:47:04.395385 ignition[957]: INFO : files: files passed
Mar 6 01:47:04.395385 ignition[957]: INFO : Ignition finished successfully
Mar 6 01:47:04.391535 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 01:47:04.420458 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 01:47:04.428113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 01:47:04.434077 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 01:47:04.434361 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 01:47:04.453927 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 6 01:47:04.461221 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:47:04.461221 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:47:04.455206 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 01:47:04.482267 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 01:47:04.461483 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 01:47:04.469273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 01:47:04.508295 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 01:47:04.510935 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 01:47:04.520335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 01:47:04.527507 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 01:47:04.535592 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 01:47:04.556413 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 01:47:04.578072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:47:04.595373 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 01:47:04.609669 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:47:04.616207 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:47:04.622697 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 01:47:04.627619 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 01:47:04.630383 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 01:47:04.637167 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 01:47:04.644709 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 01:47:04.650082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 01:47:04.658464 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:47:04.667563 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 01:47:04.674767 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 01:47:04.680676 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:47:04.692619 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 01:47:04.701242 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 6 01:47:04.708827 systemd[1]: Stopped target swap.target - Swaps. Mar 6 01:47:04.714899 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 01:47:04.719011 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:47:04.728331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:47:04.737336 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:47:04.747836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 01:47:04.750955 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:47:04.759679 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 01:47:04.763073 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 01:47:04.772052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 01:47:04.775822 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:47:04.783747 systemd[1]: Stopped target paths.target - Path Units. Mar 6 01:47:04.788640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 6 01:47:04.789067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:47:04.798574 systemd[1]: Stopped target slices.target - Slice Units. Mar 6 01:47:04.798764 systemd[1]: Stopped target sockets.target - Socket Units. Mar 6 01:47:04.809384 systemd[1]: iscsid.socket: Deactivated successfully. Mar 6 01:47:04.809534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:47:04.813071 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 6 01:47:04.813210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 6 01:47:04.823441 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 01:47:04.823558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:47:04.826562 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 01:47:04.826663 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 01:47:04.861369 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 01:47:04.865272 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 6 01:47:04.865418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:47:04.874760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 01:47:04.885621 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 6 01:47:04.885820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:47:04.892568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 6 01:47:04.892776 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:47:04.915308 ignition[1012]: INFO : Ignition 2.19.0 Mar 6 01:47:04.915308 ignition[1012]: INFO : Stage: umount Mar 6 01:47:04.915308 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:47:04.915308 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:47:04.915308 ignition[1012]: INFO : umount: umount passed Mar 6 01:47:04.915308 ignition[1012]: INFO : Ignition finished successfully Mar 6 01:47:04.910383 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 01:47:04.912697 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 01:47:04.912835 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 6 01:47:04.921429 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 01:47:04.921633 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Mar 6 01:47:04.927570 systemd[1]: Stopped target network.target - Network. Mar 6 01:47:04.934341 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 01:47:04.934434 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 01:47:04.936943 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 01:47:04.937001 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 01:47:04.944757 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 6 01:47:04.944820 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 6 01:47:04.952076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 6 01:47:04.952206 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 6 01:47:04.954445 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 6 01:47:04.954556 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 6 01:47:04.959711 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 6 01:47:04.964930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 6 01:47:04.978689 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 6 01:47:04.978947 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 6 01:47:04.981203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 6 01:47:04.981267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:47:04.991275 systemd-networkd[779]: eth0: DHCPv6 lease lost Mar 6 01:47:04.993914 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 01:47:04.994057 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 01:47:04.999741 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 6 01:47:04.999979 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Mar 6 01:47:05.003557 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 6 01:47:05.003639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:47:05.028418 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 6 01:47:05.028578 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 6 01:47:05.028656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:47:05.034428 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:47:05.034494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:47:05.040333 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 6 01:47:05.040460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 6 01:47:05.046273 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:47:05.063465 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 6 01:47:05.063844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:47:05.070348 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 6 01:47:05.070591 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 6 01:47:05.076534 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 6 01:47:05.076640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 6 01:47:05.081677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 6 01:47:05.081748 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:47:05.088370 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 6 01:47:05.088455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:47:05.094368 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 6 01:47:05.094492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 6 01:47:05.101069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:47:05.101194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:47:05.120552 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 6 01:47:05.126017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 6 01:47:05.126102 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:47:05.132205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:47:05.132270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:47:05.204626 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 6 01:47:05.139938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 6 01:47:05.140253 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 6 01:47:05.148360 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 6 01:47:05.156572 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 6 01:47:05.172261 systemd[1]: Switching root. 
Mar 6 01:47:05.219232 systemd-journald[194]: Journal stopped Mar 6 01:47:07.265538 kernel: SELinux: policy capability network_peer_controls=1 Mar 6 01:47:07.265646 kernel: SELinux: policy capability open_perms=1 Mar 6 01:47:07.265679 kernel: SELinux: policy capability extended_socket_class=1 Mar 6 01:47:07.265706 kernel: SELinux: policy capability always_check_network=0 Mar 6 01:47:07.265732 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 6 01:47:07.265751 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 6 01:47:07.265777 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 6 01:47:07.265794 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 6 01:47:07.265812 kernel: audit: type=1403 audit(1772761625.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 6 01:47:07.265837 systemd[1]: Successfully loaded SELinux policy in 157.915ms. Mar 6 01:47:07.265914 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 140.580ms. Mar 6 01:47:07.265939 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:47:07.265959 systemd[1]: Detected virtualization kvm. Mar 6 01:47:07.265979 systemd[1]: Detected architecture x86-64. Mar 6 01:47:07.265997 systemd[1]: Detected first boot. Mar 6 01:47:07.266028 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:47:07.266052 zram_generator::config[1055]: No configuration found. Mar 6 01:47:07.266075 systemd[1]: Populated /etc with preset unit settings. Mar 6 01:47:07.266095 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 6 01:47:07.266112 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 6 01:47:07.266204 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 6 01:47:07.266234 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 6 01:47:07.266266 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 6 01:47:07.266288 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 6 01:47:07.266309 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 6 01:47:07.266331 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 6 01:47:07.266352 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 6 01:47:07.266375 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 6 01:47:07.266407 systemd[1]: Created slice user.slice - User and Session Slice. Mar 6 01:47:07.266428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:47:07.266451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:47:07.266475 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 6 01:47:07.266497 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 6 01:47:07.266519 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 6 01:47:07.266539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 6 01:47:07.266561 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 6 01:47:07.266584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:47:07.266606 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 6 01:47:07.266632 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 6 01:47:07.266651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 6 01:47:07.266677 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 6 01:47:07.266697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:47:07.266718 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:47:07.266740 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:47:07.266759 systemd[1]: Reached target swap.target - Swaps. Mar 6 01:47:07.266779 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 6 01:47:07.266798 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 6 01:47:07.266816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:47:07.266840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 01:47:07.266902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:47:07.266922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 6 01:47:07.266939 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 6 01:47:07.266964 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 6 01:47:07.266982 systemd[1]: Mounting media.mount - External Media Directory... Mar 6 01:47:07.267001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:47:07.267019 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 6 01:47:07.267036 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 6 01:47:07.267056 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 6 01:47:07.267074 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 6 01:47:07.267093 systemd[1]: Reached target machines.target - Containers. Mar 6 01:47:07.267110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 6 01:47:07.267177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:47:07.267196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:47:07.267214 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 6 01:47:07.267232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:47:07.267255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:47:07.267272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:47:07.267289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 6 01:47:07.267307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:47:07.267325 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 6 01:47:07.267346 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 6 01:47:07.267368 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 6 01:47:07.267385 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 6 01:47:07.267402 systemd[1]: Stopped systemd-fsck-usr.service. Mar 6 01:47:07.267424 kernel: loop: module loaded Mar 6 01:47:07.267441 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 6 01:47:07.267458 kernel: fuse: init (API version 7.39) Mar 6 01:47:07.267474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 01:47:07.267492 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 01:47:07.267509 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 6 01:47:07.267528 kernel: ACPI: bus type drm_connector registered Mar 6 01:47:07.267576 systemd-journald[1139]: Collecting audit messages is disabled. Mar 6 01:47:07.267622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:47:07.267646 systemd-journald[1139]: Journal started Mar 6 01:47:07.267679 systemd-journald[1139]: Runtime Journal (/run/log/journal/bb061870189e42c6bd956a428f59b425) is 6.0M, max 48.4M, 42.3M free. Mar 6 01:47:06.693969 systemd[1]: Queued start job for default target multi-user.target. Mar 6 01:47:06.728585 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 6 01:47:06.729500 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 6 01:47:06.730224 systemd[1]: systemd-journald.service: Consumed 1.690s CPU time. Mar 6 01:47:07.273899 systemd[1]: verity-setup.service: Deactivated successfully. Mar 6 01:47:07.273984 systemd[1]: Stopped verity-setup.service. Mar 6 01:47:07.282215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:47:07.287459 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:47:07.290846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 6 01:47:07.294410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 6 01:47:07.297970 systemd[1]: Mounted media.mount - External Media Directory. Mar 6 01:47:07.301407 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 6 01:47:07.304477 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 6 01:47:07.307588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 6 01:47:07.310950 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 6 01:47:07.315304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:47:07.320376 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 6 01:47:07.320761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 6 01:47:07.324824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:47:07.325327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:47:07.329243 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:47:07.329574 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:47:07.333115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:47:07.333511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:47:07.337446 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 6 01:47:07.337749 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 6 01:47:07.341745 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:47:07.342043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:47:07.346791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:47:07.350357 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 01:47:07.354811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 6 01:47:07.372805 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 01:47:07.390333 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 6 01:47:07.395957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 6 01:47:07.399496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 6 01:47:07.399552 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 01:47:07.404550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 6 01:47:07.411553 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 01:47:07.417422 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 6 01:47:07.420812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:47:07.423305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 6 01:47:07.430377 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 6 01:47:07.434788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:47:07.436285 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 6 01:47:07.439941 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:47:07.453397 systemd-journald[1139]: Time spent on flushing to /var/log/journal/bb061870189e42c6bd956a428f59b425 is 24.061ms for 942 entries. Mar 6 01:47:07.453397 systemd-journald[1139]: System Journal (/var/log/journal/bb061870189e42c6bd956a428f59b425) is 8.0M, max 195.6M, 187.6M free. Mar 6 01:47:07.502710 systemd-journald[1139]: Received client request to flush runtime journal. Mar 6 01:47:07.444323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 6 01:47:07.450408 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 6 01:47:07.458686 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 6 01:47:07.469958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:47:07.478632 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 6 01:47:07.487966 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 6 01:47:07.496750 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 01:47:07.506102 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 6 01:47:07.511358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 6 01:47:07.520555 kernel: loop0: detected capacity change from 0 to 217752 Mar 6 01:47:07.526790 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 6 01:47:08.167622 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 6 01:47:08.168693 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 6 01:47:08.179540 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 6 01:47:08.184113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:47:08.188320 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 6 01:47:08.208667 kernel: loop1: detected capacity change from 0 to 142488 Mar 6 01:47:08.208341 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:47:08.229680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 6 01:47:08.238416 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 6 01:47:08.296825 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 6 01:47:08.332310 kernel: loop2: detected capacity change from 0 to 140768 Mar 6 01:47:08.337670 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 6 01:47:08.337703 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 6 01:47:08.362527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:47:08.405189 kernel: loop3: detected capacity change from 0 to 217752 Mar 6 01:47:08.434226 kernel: loop4: detected capacity change from 0 to 142488 Mar 6 01:47:08.464260 kernel: loop5: detected capacity change from 0 to 140768 Mar 6 01:47:08.498287 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 6 01:47:08.499389 (sd-merge)[1194]: Merged extensions into '/usr'. Mar 6 01:47:08.509072 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Mar 6 01:47:08.509115 systemd[1]: Reloading... Mar 6 01:47:08.595200 zram_generator::config[1221]: No configuration found. Mar 6 01:47:08.622961 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 6 01:47:08.730927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:47:08.776835 systemd[1]: Reloading finished in 266 ms. Mar 6 01:47:08.814884 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 01:47:08.818659 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:47:08.822553 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 6 01:47:08.844457 systemd[1]: Starting ensure-sysext...
Mar 6 01:47:08.848977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:47:08.853652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:47:08.858406 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 6 01:47:08.858440 systemd[1]: Reloading... Mar 6 01:47:08.878284 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 6 01:47:08.878656 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 01:47:08.879772 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 01:47:08.880085 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 6 01:47:08.880774 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 6 01:47:08.887366 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:47:08.887458 systemd-tmpfiles[1259]: Skipping /boot Mar 6 01:47:08.895792 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Mar 6 01:47:08.908062 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:47:08.908096 systemd-tmpfiles[1259]: Skipping /boot Mar 6 01:47:08.918185 zram_generator::config[1292]: No configuration found. Mar 6 01:47:09.360174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1306) Mar 6 01:47:09.385654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 6 01:47:09.455204 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:47:09.455931 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 6 01:47:09.455989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:47:09.460321 systemd[1]: Reloading finished in 601 ms. Mar 6 01:47:09.467241 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:47:09.482084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:47:09.554981 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:47:09.580475 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:47:09.614643 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:47:09.615249 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:47:09.615920 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:47:09.630171 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:47:09.664227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:47:09.722929 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:47:09.736562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:47:09.740953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:47:09.747566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:47:09.766554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:47:09.776593 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 6 01:47:09.784060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:47:09.791780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:47:09.809618 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:47:09.855445 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 01:47:09.867107 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:47:09.882177 kernel: kvm_amd: TSC scaling supported Mar 6 01:47:09.882246 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:47:09.882284 kernel: kvm_amd: Nested Paging enabled Mar 6 01:47:09.882298 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:47:09.882334 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:47:09.878434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:47:09.943429 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:47:09.948447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:47:09.951421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:47:09.952893 systemd[1]: Finished ensure-sysext.service. Mar 6 01:47:09.956274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:47:09.956497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:47:09.959727 augenrules[1382]: No rules Mar 6 01:47:09.961026 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:47:09.964514 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:47:09.964795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 6 01:47:10.061382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 01:47:10.061588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 01:47:10.063212 kernel: EDAC MC: Ver: 3.0.0
Mar 6 01:47:10.066772 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 01:47:10.070833 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 01:47:10.071063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 01:47:10.074692 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 01:47:10.078653 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 01:47:10.093516 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 01:47:10.093774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 01:47:10.105468 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 6 01:47:10.107894 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 01:47:10.111482 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 01:47:10.111687 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 01:47:10.112453 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 01:47:10.121944 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 6 01:47:10.271009 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 6 01:47:10.271888 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 01:47:10.307543 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:47:10.310804 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 01:47:10.409255 systemd-resolved[1376]: Positive Trust Anchors:
Mar 6 01:47:10.409274 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:47:10.409304 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:47:10.416644 systemd-resolved[1376]: Defaulting to hostname 'linux'.
Mar 6 01:47:10.418455 systemd-networkd[1374]: lo: Link UP
Mar 6 01:47:10.418487 systemd-networkd[1374]: lo: Gained carrier
Mar 6 01:47:10.420805 systemd-networkd[1374]: Enumeration completed
Mar 6 01:47:10.421814 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:47:10.421843 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:47:10.423028 systemd-networkd[1374]: eth0: Link UP
Mar 6 01:47:10.423040 systemd-networkd[1374]: eth0: Gained carrier
Mar 6 01:47:10.423054 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:47:10.440254 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.156/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:47:10.441903 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Mar 6 01:47:10.954004 systemd-resolved[1376]: Clock change detected. Flushing caches.
Mar 6 01:47:10.954078 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 6 01:47:10.954264 systemd-timesyncd[1397]: Initial clock synchronization to Fri 2026-03-06 01:47:10.953921 UTC.
Mar 6 01:47:10.974579 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 6 01:47:10.980305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:47:10.983447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:47:10.986893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:47:10.990495 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 6 01:47:10.996846 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:47:10.999892 systemd[1]: Reached target network.target - Network.
Mar 6 01:47:11.008232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:47:11.021095 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:47:11.024681 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 01:47:11.028003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 01:47:11.031371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 01:47:11.034739 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 01:47:11.034786 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:47:11.037258 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 01:47:11.040289 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 01:47:11.043236 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 01:47:11.046530 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:47:11.050160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 01:47:11.057574 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 01:47:11.072343 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 01:47:11.076963 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 6 01:47:11.082628 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 01:47:11.086663 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 01:47:11.089917 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:47:11.092549 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:47:11.092930 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 6 01:47:11.095081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:47:11.095192 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 01:47:11.096898 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 01:47:11.103396 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 01:47:11.113253 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 01:47:11.118354 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 01:47:11.121468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 01:47:11.124515 jq[1426]: false
Mar 6 01:47:11.125347 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 01:47:11.132014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 01:47:11.136372 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 01:47:11.141396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 01:47:11.181969 extend-filesystems[1427]: Found loop3
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found loop4
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found loop5
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found sr0
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda1
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda2
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda3
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found usr
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda4
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda6
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda7
Mar 6 01:47:11.188325 extend-filesystems[1427]: Found vda9
Mar 6 01:47:11.188325 extend-filesystems[1427]: Checking size of /dev/vda9
Mar 6 01:47:11.295953 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 6 01:47:11.311838 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1314)
Mar 6 01:47:11.311933 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 6 01:47:11.215569 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 01:47:11.203345 dbus-daemon[1425]: [system] SELinux support is enabled
Mar 6 01:47:11.314270 extend-filesystems[1427]: Resized partition /dev/vda9
Mar 6 01:47:11.220758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 01:47:11.426766 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024)
Mar 6 01:47:11.221355 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 01:47:11.589014 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 6 01:47:11.589014 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 6 01:47:11.589014 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 6 01:47:11.225340 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 01:47:11.616277 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Mar 6 01:47:11.625322 update_engine[1444]: I20260306 01:47:11.308681 1444 main.cc:92] Flatcar Update Engine starting
Mar 6 01:47:11.625322 update_engine[1444]: I20260306 01:47:11.311096 1444 update_check_scheduler.cc:74] Next update check in 11m28s
Mar 6 01:47:11.240387 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 01:47:11.626422 jq[1446]: true
Mar 6 01:47:11.253739 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 01:47:11.266552 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 6 01:47:11.626943 jq[1452]: true
Mar 6 01:47:11.291689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 01:47:11.291967 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 01:47:11.292421 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 01:47:11.292653 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 01:47:11.298776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 01:47:11.299203 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 01:47:11.589657 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 01:47:11.589926 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 01:47:11.607109 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 01:47:11.614458 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 6 01:47:11.614486 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 6 01:47:11.614794 systemd-logind[1442]: New seat seat0.
Mar 6 01:47:11.617684 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 01:47:11.632667 dbus-daemon[1425]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 01:47:11.639162 tar[1451]: linux-amd64/LICENSE
Mar 6 01:47:11.639162 tar[1451]: linux-amd64/helm
Mar 6 01:47:11.650431 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 01:47:11.660023 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 01:47:11.666163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 01:47:11.667349 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 01:47:11.671936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 01:47:11.672113 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 01:47:11.776364 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 01:47:11.823348 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 01:47:11.825954 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 01:47:11.830634 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 6 01:47:11.929397 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 01:47:11.980920 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 6 01:47:12.106969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 6 01:47:12.130418 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 6 01:47:12.135413 systemd[1]: Started sshd@0-10.0.0.156:22-10.0.0.1:55354.service - OpenSSH per-connection server daemon (10.0.0.1:55354).
Mar 6 01:47:12.139588 systemd[1]: issuegen.service: Deactivated successfully.
Mar 6 01:47:12.139841 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 6 01:47:12.189339 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 6 01:47:12.310850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 6 01:47:12.353851 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 6 01:47:12.360790 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 6 01:47:12.365033 systemd[1]: Reached target getty.target - Login Prompts.
Mar 6 01:47:12.388088 sshd[1500]: Accepted publickey for core from 10.0.0.1 port 55354 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:47:12.391238 sshd[1500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:47:12.403523 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 6 01:47:12.411446 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 6 01:47:12.418568 systemd-logind[1442]: New session 1 of user core.
Mar 6 01:47:12.432420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 6 01:47:12.441085 containerd[1454]: time="2026-03-06T01:47:12.440780428Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 6 01:47:12.443511 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 6 01:47:12.462357 (systemd)[1516]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 6 01:47:12.476642 containerd[1454]: time="2026-03-06T01:47:12.476550412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.480607 containerd[1454]: time="2026-03-06T01:47:12.480395626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:47:12.480607 containerd[1454]: time="2026-03-06T01:47:12.480576935Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 6 01:47:12.480708 containerd[1454]: time="2026-03-06T01:47:12.480610107Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 6 01:47:12.481077 containerd[1454]: time="2026-03-06T01:47:12.481012928Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 6 01:47:12.481180 containerd[1454]: time="2026-03-06T01:47:12.481073141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..."
type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481322 containerd[1454]: time="2026-03-06T01:47:12.481251073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481322 containerd[1454]: time="2026-03-06T01:47:12.481307779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481716 containerd[1454]: time="2026-03-06T01:47:12.481648145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481716 containerd[1454]: time="2026-03-06T01:47:12.481698288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481716 containerd[1454]: time="2026-03-06T01:47:12.481714549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:47:12.481810 containerd[1454]: time="2026-03-06T01:47:12.481725940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.482059 containerd[1454]: time="2026-03-06T01:47:12.481995503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.482476 containerd[1454]: time="2026-03-06T01:47:12.482421158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 6 01:47:12.482733 containerd[1454]: time="2026-03-06T01:47:12.482672217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 6 01:47:12.482793 containerd[1454]: time="2026-03-06T01:47:12.482731197Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 6 01:47:12.483014 containerd[1454]: time="2026-03-06T01:47:12.482958471Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 6 01:47:12.483186 containerd[1454]: time="2026-03-06T01:47:12.483091310Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 01:47:12.490924 containerd[1454]: time="2026-03-06T01:47:12.490819822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 6 01:47:12.491174 containerd[1454]: time="2026-03-06T01:47:12.491058727Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 6 01:47:12.491206 containerd[1454]: time="2026-03-06T01:47:12.491172049Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 6 01:47:12.491227 containerd[1454]: time="2026-03-06T01:47:12.491205071Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 6 01:47:12.491281 containerd[1454]: time="2026-03-06T01:47:12.491229156Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 6 01:47:12.491503 containerd[1454]: time="2026-03-06T01:47:12.491448726Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 6 01:47:12.491942 containerd[1454]: time="2026-03-06T01:47:12.491889690Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2
Mar 6 01:47:12.492214 containerd[1454]: time="2026-03-06T01:47:12.492102957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 6 01:47:12.492243 containerd[1454]: time="2026-03-06T01:47:12.492223703Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 6 01:47:12.492301 containerd[1454]: time="2026-03-06T01:47:12.492251304Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 6 01:47:12.492358 containerd[1454]: time="2026-03-06T01:47:12.492315805Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492358 containerd[1454]: time="2026-03-06T01:47:12.492345801Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492396 containerd[1454]: time="2026-03-06T01:47:12.492366901Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492414 containerd[1454]: time="2026-03-06T01:47:12.492389572Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492433 containerd[1454]: time="2026-03-06T01:47:12.492416393Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492452 containerd[1454]: time="2026-03-06T01:47:12.492440668Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492520 containerd[1454]: time="2026-03-06T01:47:12.492463521Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1
Mar 6 01:47:12.492520 containerd[1454]: time="2026-03-06T01:47:12.492485552Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 6 01:47:12.492600 containerd[1454]: time="2026-03-06T01:47:12.492545213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492622 containerd[1454]: time="2026-03-06T01:47:12.492605676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492669 containerd[1454]: time="2026-03-06T01:47:12.492629981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492716 containerd[1454]: time="2026-03-06T01:47:12.492665939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492716 containerd[1454]: time="2026-03-06T01:47:12.492691667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492752 containerd[1454]: time="2026-03-06T01:47:12.492715291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492752 containerd[1454]: time="2026-03-06T01:47:12.492737643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492784 containerd[1454]: time="2026-03-06T01:47:12.492761367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492803 containerd[1454]: time="2026-03-06T01:47:12.492790612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492846 containerd[1454]: time="2026-03-06T01:47:12.492819916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1
Mar 6 01:47:12.492906 containerd[1454]: time="2026-03-06T01:47:12.492841637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492927 containerd[1454]: time="2026-03-06T01:47:12.492902811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.492993 containerd[1454]: time="2026-03-06T01:47:12.492943187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.493067 containerd[1454]: time="2026-03-06T01:47:12.493002317Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 6 01:47:12.493091 containerd[1454]: time="2026-03-06T01:47:12.493070404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.493166 containerd[1454]: time="2026-03-06T01:47:12.493096252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.493581 containerd[1454]: time="2026-03-06T01:47:12.493117071Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 6 01:47:12.493891 containerd[1454]: time="2026-03-06T01:47:12.493801559Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 6 01:47:12.494244 containerd[1454]: time="2026-03-06T01:47:12.494061044Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 6 01:47:12.494380 containerd[1454]: time="2026-03-06T01:47:12.494244917Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..."
type=io.containerd.internal.v1
Mar 6 01:47:12.495528 containerd[1454]: time="2026-03-06T01:47:12.494371584Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 6 01:47:12.495528 containerd[1454]: time="2026-03-06T01:47:12.494475448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.495528 containerd[1454]: time="2026-03-06T01:47:12.494582498Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 6 01:47:12.495528 containerd[1454]: time="2026-03-06T01:47:12.494695258Z" level=info msg="NRI interface is disabled by configuration."
Mar 6 01:47:12.495528 containerd[1454]: time="2026-03-06T01:47:12.494718612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 6 01:47:12.495851 containerd[1454]: time="2026-03-06T01:47:12.495711556Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter:
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 6 01:47:12.495851 containerd[1454]: time="2026-03-06T01:47:12.495833294Z" level=info msg="Connect containerd service"
Mar 6 01:47:12.496109 containerd[1454]: time="2026-03-06T01:47:12.495956243Z" level=info msg="using legacy CRI server"
Mar 6 01:47:12.496109 containerd[1454]: time="2026-03-06T01:47:12.495967333Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 01:47:12.496269 containerd[1454]: time="2026-03-06T01:47:12.496207833Z"
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 6 01:47:12.499386 containerd[1454]: time="2026-03-06T01:47:12.499233552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 01:47:12.499722 containerd[1454]: time="2026-03-06T01:47:12.499613141Z" level=info msg="Start subscribing containerd event"
Mar 6 01:47:12.499753 containerd[1454]: time="2026-03-06T01:47:12.499722586Z" level=info msg="Start recovering state"
Mar 6 01:47:12.499908 containerd[1454]: time="2026-03-06T01:47:12.499819106Z" level=info msg="Start event monitor"
Mar 6 01:47:12.499908 containerd[1454]: time="2026-03-06T01:47:12.499899897Z" level=info msg="Start snapshots syncer"
Mar 6 01:47:12.499956 containerd[1454]: time="2026-03-06T01:47:12.499941905Z" level=info msg="Start cni network conf syncer for default"
Mar 6 01:47:12.499975 containerd[1454]: time="2026-03-06T01:47:12.499959138Z" level=info msg="Start streaming server"
Mar 6 01:47:12.505182 containerd[1454]: time="2026-03-06T01:47:12.502252322Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 6 01:47:12.505182 containerd[1454]: time="2026-03-06T01:47:12.504367299Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 6 01:47:12.505182 containerd[1454]: time="2026-03-06T01:47:12.504898090Z" level=info msg="containerd successfully booted in 0.066005s"
Mar 6 01:47:12.504668 systemd[1]: Started containerd.service - containerd container runtime.
Mar 6 01:47:12.579943 systemd[1516]: Queued start job for default target default.target.
Mar 6 01:47:12.594961 systemd[1516]: Created slice app.slice - User Application Slice.
Mar 6 01:47:12.594992 systemd[1516]: Reached target paths.target - Paths.
Mar 6 01:47:12.595006 systemd[1516]: Reached target timers.target - Timers. Mar 6 01:47:12.597166 systemd[1516]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:47:12.636422 systemd[1516]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:47:12.636565 systemd[1516]: Reached target sockets.target - Sockets. Mar 6 01:47:12.636581 systemd[1516]: Reached target basic.target - Basic System. Mar 6 01:47:12.636627 systemd[1516]: Reached target default.target - Main User Target. Mar 6 01:47:12.636671 systemd[1516]: Startup finished in 164ms. Mar 6 01:47:12.637008 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:47:12.645551 tar[1451]: linux-amd64/README.md Mar 6 01:47:12.655672 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:47:12.676346 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:47:12.728566 systemd[1]: Started sshd@1-10.0.0.156:22-10.0.0.1:52102.service - OpenSSH per-connection server daemon (10.0.0.1:52102). Mar 6 01:47:12.771585 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 52102 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:12.773472 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:12.780085 systemd-logind[1442]: New session 2 of user core. Mar 6 01:47:12.790278 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:47:12.852986 sshd[1534]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:12.858492 systemd-networkd[1374]: eth0: Gained IPv6LL Mar 6 01:47:12.863220 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:47:12.867962 systemd[1]: sshd@1-10.0.0.156:22-10.0.0.1:52102.service: Deactivated successfully. Mar 6 01:47:12.870415 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 01:47:12.871303 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. 
Mar 6 01:47:12.873942 systemd[1]: Reached target network-online.target - Network is Online. Mar 6 01:47:12.895706 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:47:12.900961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:12.906044 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:47:12.911609 systemd[1]: Started sshd@2-10.0.0.156:22-10.0.0.1:52118.service - OpenSSH per-connection server daemon (10.0.0.1:52118). Mar 6 01:47:12.925572 systemd-logind[1442]: Removed session 2. Mar 6 01:47:12.946018 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:47:12.946501 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:47:12.950788 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:47:12.955470 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:47:12.967710 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 52118 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:12.969592 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:12.975634 systemd-logind[1442]: New session 3 of user core. Mar 6 01:47:12.983364 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:47:13.046372 sshd[1546]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:13.052271 systemd[1]: sshd@2-10.0.0.156:22-10.0.0.1:52118.service: Deactivated successfully. Mar 6 01:47:13.054597 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:47:13.055646 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:47:13.057422 systemd-logind[1442]: Removed session 3. Mar 6 01:47:13.736248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 6 01:47:13.740385 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:47:13.742439 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:47:13.745592 systemd[1]: Startup finished in 2.475s (kernel) + 10.478s (initrd) + 7.862s (userspace) = 20.817s. Mar 6 01:47:14.186914 kubelet[1570]: E0306 01:47:14.186628 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:47:14.190540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:47:14.190818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:47:14.191344 systemd[1]: kubelet.service: Consumed 1.009s CPU time. Mar 6 01:47:23.061568 systemd[1]: Started sshd@3-10.0.0.156:22-10.0.0.1:37656.service - OpenSSH per-connection server daemon (10.0.0.1:37656). Mar 6 01:47:23.097859 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 37656 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.100183 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.106075 systemd-logind[1442]: New session 4 of user core. Mar 6 01:47:23.124376 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:47:23.186486 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:23.205618 systemd[1]: sshd@3-10.0.0.156:22-10.0.0.1:37656.service: Deactivated successfully. Mar 6 01:47:23.207983 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:47:23.209937 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. 
Mar 6 01:47:23.227446 systemd[1]: Started sshd@4-10.0.0.156:22-10.0.0.1:37666.service - OpenSSH per-connection server daemon (10.0.0.1:37666). Mar 6 01:47:23.228654 systemd-logind[1442]: Removed session 4. Mar 6 01:47:23.255851 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 37666 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.257655 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.263543 systemd-logind[1442]: New session 5 of user core. Mar 6 01:47:23.276273 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:47:23.328680 sshd[1590]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:23.335954 systemd[1]: sshd@4-10.0.0.156:22-10.0.0.1:37666.service: Deactivated successfully. Mar 6 01:47:23.337760 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:47:23.339426 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 6 01:47:23.351425 systemd[1]: Started sshd@5-10.0.0.156:22-10.0.0.1:37682.service - OpenSSH per-connection server daemon (10.0.0.1:37682). Mar 6 01:47:23.352625 systemd-logind[1442]: Removed session 5. Mar 6 01:47:23.380160 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 37682 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.381845 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.387159 systemd-logind[1442]: New session 6 of user core. Mar 6 01:47:23.397294 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:47:23.455546 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:23.469042 systemd[1]: sshd@5-10.0.0.156:22-10.0.0.1:37682.service: Deactivated successfully. Mar 6 01:47:23.470811 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:47:23.472652 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. 
Mar 6 01:47:23.485433 systemd[1]: Started sshd@6-10.0.0.156:22-10.0.0.1:37694.service - OpenSSH per-connection server daemon (10.0.0.1:37694). Mar 6 01:47:23.486668 systemd-logind[1442]: Removed session 6. Mar 6 01:47:23.513662 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 37694 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.515581 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.520829 systemd-logind[1442]: New session 7 of user core. Mar 6 01:47:23.535291 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 01:47:23.599012 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:47:23.599587 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:47:23.621435 sudo[1607]: pam_unix(sudo:session): session closed for user root Mar 6 01:47:23.623782 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:23.632330 systemd[1]: sshd@6-10.0.0.156:22-10.0.0.1:37694.service: Deactivated successfully. Mar 6 01:47:23.634459 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:47:23.636197 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:47:23.637794 systemd[1]: Started sshd@7-10.0.0.156:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704). Mar 6 01:47:23.638906 systemd-logind[1442]: Removed session 7. Mar 6 01:47:23.671758 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.673699 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.679251 systemd-logind[1442]: New session 8 of user core. Mar 6 01:47:23.699452 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 6 01:47:23.760015 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:47:23.760522 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:47:23.766214 sudo[1616]: pam_unix(sudo:session): session closed for user root Mar 6 01:47:23.776465 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:47:23.776950 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:47:23.798495 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 6 01:47:23.801367 auditctl[1619]: No rules Mar 6 01:47:23.802766 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 01:47:23.803203 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:47:23.805562 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:47:23.847454 augenrules[1637]: No rules Mar 6 01:47:23.849318 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:47:23.850646 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 6 01:47:23.852689 sshd[1612]: pam_unix(sshd:session): session closed for user core Mar 6 01:47:23.869323 systemd[1]: sshd@7-10.0.0.156:22-10.0.0.1:37704.service: Deactivated successfully. Mar 6 01:47:23.871356 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:47:23.873185 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:47:23.880468 systemd[1]: Started sshd@8-10.0.0.156:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710). Mar 6 01:47:23.881722 systemd-logind[1442]: Removed session 8. 
Mar 6 01:47:23.907844 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:47:23.909455 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:47:23.914553 systemd-logind[1442]: New session 9 of user core. Mar 6 01:47:23.928281 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 01:47:23.986408 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:47:23.986839 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:47:24.233428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:47:24.250412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:24.323480 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 01:47:24.323947 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:47:24.413576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:47:24.419792 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:47:24.469284 kubelet[1675]: E0306 01:47:24.469222 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:47:24.474347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:47:24.474593 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 6 01:47:24.643106 dockerd[1669]: time="2026-03-06T01:47:24.642848432Z" level=info msg="Starting up" Mar 6 01:47:24.787928 systemd[1]: var-lib-docker-metacopy\x2dcheck3905083064-merged.mount: Deactivated successfully. Mar 6 01:47:24.820484 dockerd[1669]: time="2026-03-06T01:47:24.820370979Z" level=info msg="Loading containers: start." Mar 6 01:47:24.968211 kernel: Initializing XFRM netlink socket Mar 6 01:47:25.076427 systemd-networkd[1374]: docker0: Link UP Mar 6 01:47:25.103721 dockerd[1669]: time="2026-03-06T01:47:25.103550826Z" level=info msg="Loading containers: done." Mar 6 01:47:25.124856 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3911554298-merged.mount: Deactivated successfully. Mar 6 01:47:25.125398 dockerd[1669]: time="2026-03-06T01:47:25.125292473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:47:25.125509 dockerd[1669]: time="2026-03-06T01:47:25.125464123Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:47:25.125721 dockerd[1669]: time="2026-03-06T01:47:25.125649108Z" level=info msg="Daemon has completed initialization" Mar 6 01:47:25.182326 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:47:25.182648 dockerd[1669]: time="2026-03-06T01:47:25.182407492Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:47:25.612907 containerd[1454]: time="2026-03-06T01:47:25.612804256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 6 01:47:26.162417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858598377.mount: Deactivated successfully. 
Mar 6 01:47:27.095916 containerd[1454]: time="2026-03-06T01:47:27.095807273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:27.097050 containerd[1454]: time="2026-03-06T01:47:27.096934194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 6 01:47:27.098492 containerd[1454]: time="2026-03-06T01:47:27.098435911Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:27.101838 containerd[1454]: time="2026-03-06T01:47:27.101762371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:27.103094 containerd[1454]: time="2026-03-06T01:47:27.103026181Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.490178141s" Mar 6 01:47:27.103094 containerd[1454]: time="2026-03-06T01:47:27.103082315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 6 01:47:27.103809 containerd[1454]: time="2026-03-06T01:47:27.103712702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 6 01:47:28.084606 containerd[1454]: time="2026-03-06T01:47:28.084516736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.086262 containerd[1454]: time="2026-03-06T01:47:28.086084385Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 6 01:47:28.088028 containerd[1454]: time="2026-03-06T01:47:28.087799651Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.092109 containerd[1454]: time="2026-03-06T01:47:28.092023291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.093757 containerd[1454]: time="2026-03-06T01:47:28.093649587Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 989.891741ms" Mar 6 01:47:28.093757 containerd[1454]: time="2026-03-06T01:47:28.093701454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 6 01:47:28.094388 containerd[1454]: time="2026-03-06T01:47:28.094362369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 6 01:47:28.966066 containerd[1454]: time="2026-03-06T01:47:28.965985631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.966982 containerd[1454]: time="2026-03-06T01:47:28.966905500Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 6 01:47:28.968229 containerd[1454]: time="2026-03-06T01:47:28.968109007Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.971313 containerd[1454]: time="2026-03-06T01:47:28.971241446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:28.972560 containerd[1454]: time="2026-03-06T01:47:28.972495258Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 878.003516ms" Mar 6 01:47:28.972560 containerd[1454]: time="2026-03-06T01:47:28.972542205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 6 01:47:28.973311 containerd[1454]: time="2026-03-06T01:47:28.973208890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 6 01:47:32.867826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159003194.mount: Deactivated successfully. 
Mar 6 01:47:33.322011 containerd[1454]: time="2026-03-06T01:47:33.321384702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:33.322011 containerd[1454]: time="2026-03-06T01:47:33.321736113Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 6 01:47:33.324638 containerd[1454]: time="2026-03-06T01:47:33.323389971Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:33.326242 containerd[1454]: time="2026-03-06T01:47:33.326182766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:33.327006 containerd[1454]: time="2026-03-06T01:47:33.326918967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 4.353607495s" Mar 6 01:47:33.327048 containerd[1454]: time="2026-03-06T01:47:33.327033190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 6 01:47:33.329853 containerd[1454]: time="2026-03-06T01:47:33.329678715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 6 01:47:33.901014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505921952.mount: Deactivated successfully. Mar 6 01:47:34.485802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 6 01:47:34.509651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:34.993546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:47:34.995616 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:47:35.124850 kubelet[1967]: E0306 01:47:35.124567 1967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:47:35.135032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:47:35.135808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:47:35.491989 containerd[1454]: time="2026-03-06T01:47:35.491666520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:35.493247 containerd[1454]: time="2026-03-06T01:47:35.492581499Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 6 01:47:35.494331 containerd[1454]: time="2026-03-06T01:47:35.494266335Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:35.497880 containerd[1454]: time="2026-03-06T01:47:35.497786451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:35.499796 containerd[1454]: time="2026-03-06T01:47:35.499677620Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.169945785s" Mar 6 01:47:35.499796 containerd[1454]: time="2026-03-06T01:47:35.499767708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 6 01:47:35.501052 containerd[1454]: time="2026-03-06T01:47:35.500980596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 6 01:47:36.008973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432012259.mount: Deactivated successfully. Mar 6 01:47:36.017618 containerd[1454]: time="2026-03-06T01:47:36.017513376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:36.018676 containerd[1454]: time="2026-03-06T01:47:36.018475588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 6 01:47:36.019832 containerd[1454]: time="2026-03-06T01:47:36.019775150Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:36.022861 containerd[1454]: time="2026-03-06T01:47:36.022670181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:36.024002 containerd[1454]: time="2026-03-06T01:47:36.023879373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 522.868179ms" Mar 6 01:47:36.024002 containerd[1454]: time="2026-03-06T01:47:36.023989568Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 6 01:47:36.024836 containerd[1454]: time="2026-03-06T01:47:36.024748191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 6 01:47:36.603301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449861796.mount: Deactivated successfully. Mar 6 01:47:38.021380 containerd[1454]: time="2026-03-06T01:47:38.020620086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:38.021380 containerd[1454]: time="2026-03-06T01:47:38.021463311Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 6 01:47:38.023752 containerd[1454]: time="2026-03-06T01:47:38.022876850Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:38.026268 containerd[1454]: time="2026-03-06T01:47:38.026200071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:47:38.027685 containerd[1454]: time="2026-03-06T01:47:38.027613961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest 
\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.00280674s" Mar 6 01:47:38.027685 containerd[1454]: time="2026-03-06T01:47:38.027670247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 6 01:47:39.557033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:47:39.574475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:39.608699 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-9.scope)... Mar 6 01:47:39.608761 systemd[1]: Reloading... Mar 6 01:47:39.717197 zram_generator::config[2111]: No configuration found. Mar 6 01:47:39.838935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:47:39.919325 systemd[1]: Reloading finished in 309 ms. Mar 6 01:47:39.984339 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:39.989274 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:47:39.989586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:47:39.991852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:47:40.154610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:47:40.160303 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:47:40.207984 kubelet[2161]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 01:47:40.330709 kubelet[2161]: I0306 01:47:40.330605 2161 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 6 01:47:40.330709 kubelet[2161]: I0306 01:47:40.330692 2161 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:47:40.330709 kubelet[2161]: I0306 01:47:40.330721 2161 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 6 01:47:40.330963 kubelet[2161]: I0306 01:47:40.330732 2161 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 01:47:40.331209 kubelet[2161]: I0306 01:47:40.331098 2161 server.go:951] "Client rotation is on, will bootstrap in background" Mar 6 01:47:40.338379 kubelet[2161]: E0306 01:47:40.338319 2161 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.156:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:47:40.341765 kubelet[2161]: I0306 01:47:40.341684 2161 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:47:40.346920 kubelet[2161]: E0306 01:47:40.346793 2161 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:47:40.346986 kubelet[2161]: I0306 01:47:40.346935 2161 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 6 01:47:40.357687 kubelet[2161]: I0306 01:47:40.357616 2161 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 6 01:47:40.358763 kubelet[2161]: I0306 01:47:40.358668 2161 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:47:40.359008 kubelet[2161]: I0306 01:47:40.358726 2161 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 01:47:40.359008 kubelet[2161]: I0306 01:47:40.358980 2161 topology_manager.go:143] "Creating topology manager with none policy" Mar 6 01:47:40.359008 
kubelet[2161]: I0306 01:47:40.358993 2161 container_manager_linux.go:308] "Creating device plugin manager" Mar 6 01:47:40.359647 kubelet[2161]: I0306 01:47:40.359108 2161 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 6 01:47:40.362229 kubelet[2161]: I0306 01:47:40.362101 2161 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 6 01:47:40.362675 kubelet[2161]: I0306 01:47:40.362601 2161 kubelet.go:482] "Attempting to sync node with API server" Mar 6 01:47:40.362675 kubelet[2161]: I0306 01:47:40.362641 2161 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:47:40.362790 kubelet[2161]: I0306 01:47:40.362702 2161 kubelet.go:394] "Adding apiserver pod source" Mar 6 01:47:40.362790 kubelet[2161]: I0306 01:47:40.362717 2161 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:47:40.366926 kubelet[2161]: I0306 01:47:40.366850 2161 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:47:40.372079 kubelet[2161]: I0306 01:47:40.371966 2161 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:47:40.372079 kubelet[2161]: I0306 01:47:40.372033 2161 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 6 01:47:40.372269 kubelet[2161]: W0306 01:47:40.372208 2161 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 6 01:47:40.381797 kubelet[2161]: I0306 01:47:40.381688 2161 server.go:1257] "Started kubelet" Mar 6 01:47:40.385829 kubelet[2161]: I0306 01:47:40.382283 2161 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:47:40.385829 kubelet[2161]: I0306 01:47:40.385831 2161 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 6 01:47:40.386938 kubelet[2161]: I0306 01:47:40.386857 2161 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:47:40.387438 kubelet[2161]: I0306 01:47:40.387398 2161 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:47:40.391223 kubelet[2161]: I0306 01:47:40.391178 2161 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:47:40.393056 kubelet[2161]: E0306 01:47:40.390520 2161 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.156:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.156:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d566e799b9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:47:40.381567902 +0000 UTC m=+0.216717732,LastTimestamp:2026-03-06 01:47:40.381567902 +0000 UTC m=+0.216717732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:47:40.395590 kubelet[2161]: I0306 01:47:40.395558 2161 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 6 01:47:40.396232 kubelet[2161]: E0306 01:47:40.396184 2161 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:47:40.398177 kubelet[2161]: I0306 01:47:40.396517 2161 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:47:40.398177 kubelet[2161]: I0306 01:47:40.396612 2161 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 6 01:47:40.398177 kubelet[2161]: E0306 01:47:40.397086 2161 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:47:40.398177 kubelet[2161]: I0306 01:47:40.398073 2161 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 6 01:47:40.398375 kubelet[2161]: I0306 01:47:40.398257 2161 reconciler.go:29] "Reconciler: start to sync state" Mar 6 01:47:40.399939 kubelet[2161]: E0306 01:47:40.399782 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="200ms" Mar 6 01:47:40.402386 kubelet[2161]: I0306 01:47:40.402308 2161 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:47:40.403812 kubelet[2161]: I0306 01:47:40.403782 2161 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:47:40.403812 kubelet[2161]: I0306 01:47:40.403797 2161 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:47:40.410203 kubelet[2161]: I0306 01:47:40.409952 2161 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 6 01:47:40.424219 kubelet[2161]: I0306 01:47:40.423245 2161 cpu_manager.go:225] "Starting" policy="none" Mar 6 01:47:40.424219 kubelet[2161]: I0306 01:47:40.423262 2161 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 6 01:47:40.424219 kubelet[2161]: I0306 01:47:40.423304 2161 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 6 01:47:40.425784 kubelet[2161]: I0306 01:47:40.425755 2161 policy_none.go:50] "Start" Mar 6 01:47:40.425882 kubelet[2161]: I0306 01:47:40.425790 2161 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 6 01:47:40.425882 kubelet[2161]: I0306 01:47:40.425804 2161 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 6 01:47:40.427535 kubelet[2161]: I0306 01:47:40.427469 2161 policy_none.go:44] "Start" Mar 6 01:47:40.437340 kubelet[2161]: I0306 01:47:40.437302 2161 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 6 01:47:40.437340 kubelet[2161]: I0306 01:47:40.437337 2161 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 6 01:47:40.437441 kubelet[2161]: I0306 01:47:40.437356 2161 kubelet.go:2501] "Starting kubelet main sync loop" Mar 6 01:47:40.437441 kubelet[2161]: E0306 01:47:40.437406 2161 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:47:40.438726 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 6 01:47:40.459731 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 6 01:47:40.463839 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 6 01:47:40.479271 kubelet[2161]: E0306 01:47:40.479221 2161 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:47:40.479506 kubelet[2161]: I0306 01:47:40.479454 2161 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 6 01:47:40.479585 kubelet[2161]: I0306 01:47:40.479491 2161 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:47:40.479850 kubelet[2161]: I0306 01:47:40.479739 2161 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 6 01:47:40.481050 kubelet[2161]: E0306 01:47:40.480949 2161 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 01:47:40.481050 kubelet[2161]: E0306 01:47:40.480984 2161 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:47:40.554420 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 6 01:47:40.577173 kubelet[2161]: E0306 01:47:40.576876 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:47:40.581704 kubelet[2161]: I0306 01:47:40.581662 2161 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:47:40.582029 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 6 01:47:40.583285 kubelet[2161]: E0306 01:47:40.582226 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 6 01:47:40.584778 kubelet[2161]: E0306 01:47:40.584714 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:47:40.587413 systemd[1]: Created slice kubepods-burstable-podb99a88ec606ab8625ea34a43af191ec5.slice - libcontainer container kubepods-burstable-podb99a88ec606ab8625ea34a43af191ec5.slice. Mar 6 01:47:40.590053 kubelet[2161]: E0306 01:47:40.589984 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:47:40.601007 kubelet[2161]: E0306 01:47:40.600949 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="400ms" Mar 6 01:47:40.700075 kubelet[2161]: I0306 01:47:40.699913 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:47:40.700075 kubelet[2161]: I0306 01:47:40.699957 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 6 01:47:40.700075 kubelet[2161]: I0306 01:47:40.699981 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:47:40.700075 kubelet[2161]: I0306 01:47:40.699996 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:47:40.700075 kubelet[2161]: I0306 01:47:40.700009 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:47:40.700427 kubelet[2161]: I0306 01:47:40.700023 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:47:40.700427 kubelet[2161]: I0306 01:47:40.700089 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" 
Mar 6 01:47:40.700427 kubelet[2161]: I0306 01:47:40.700203 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:47:40.700427 kubelet[2161]: I0306 01:47:40.700337 2161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:47:40.785030 kubelet[2161]: I0306 01:47:40.784872 2161 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:47:40.785611 kubelet[2161]: E0306 01:47:40.785446 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 6 01:47:40.883312 kubelet[2161]: E0306 01:47:40.883239 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:40.884641 containerd[1454]: time="2026-03-06T01:47:40.884472430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 6 01:47:40.888315 kubelet[2161]: E0306 01:47:40.888265 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:40.888766 containerd[1454]: 
time="2026-03-06T01:47:40.888712236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 6 01:47:40.893452 kubelet[2161]: E0306 01:47:40.893381 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:40.893933 containerd[1454]: time="2026-03-06T01:47:40.893866923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b99a88ec606ab8625ea34a43af191ec5,Namespace:kube-system,Attempt:0,}" Mar 6 01:47:41.002557 kubelet[2161]: E0306 01:47:41.002440 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="800ms" Mar 6 01:47:41.187964 kubelet[2161]: I0306 01:47:41.187827 2161 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 6 01:47:41.188271 kubelet[2161]: E0306 01:47:41.188235 2161 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.156:6443/api/v1/nodes\": dial tcp 10.0.0.156:6443: connect: connection refused" node="localhost" Mar 6 01:47:41.355362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541345689.mount: Deactivated successfully. 
Mar 6 01:47:41.363555 containerd[1454]: time="2026-03-06T01:47:41.363387861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:47:41.367279 containerd[1454]: time="2026-03-06T01:47:41.367088982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:47:41.368245 containerd[1454]: time="2026-03-06T01:47:41.368171202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:47:41.369430 containerd[1454]: time="2026-03-06T01:47:41.369360684Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:47:41.370860 containerd[1454]: time="2026-03-06T01:47:41.370791446Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:47:41.371688 containerd[1454]: time="2026-03-06T01:47:41.371629901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:47:41.372832 containerd[1454]: time="2026-03-06T01:47:41.372734815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:47:41.378109 containerd[1454]: time="2026-03-06T01:47:41.378047145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:47:41.381298 
containerd[1454]: time="2026-03-06T01:47:41.381218206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.428986ms" Mar 6 01:47:41.382684 containerd[1454]: time="2026-03-06T01:47:41.382609549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.979785ms" Mar 6 01:47:41.383596 containerd[1454]: time="2026-03-06T01:47:41.383461615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.497692ms" Mar 6 01:47:41.532682 containerd[1454]: time="2026-03-06T01:47:41.532524046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:47:41.532682 containerd[1454]: time="2026-03-06T01:47:41.532613323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:47:41.533634 containerd[1454]: time="2026-03-06T01:47:41.532645233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.533634 containerd[1454]: time="2026-03-06T01:47:41.532848963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.539273 containerd[1454]: time="2026-03-06T01:47:41.538828288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:47:41.539273 containerd[1454]: time="2026-03-06T01:47:41.539069088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:47:41.539273 containerd[1454]: time="2026-03-06T01:47:41.539084938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.541936 containerd[1454]: time="2026-03-06T01:47:41.541788125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.543646 containerd[1454]: time="2026-03-06T01:47:41.542439498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:47:41.543646 containerd[1454]: time="2026-03-06T01:47:41.542497095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:47:41.543646 containerd[1454]: time="2026-03-06T01:47:41.542515459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.543646 containerd[1454]: time="2026-03-06T01:47:41.542605557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:47:41.569387 systemd[1]: Started cri-containerd-0e9179381a5d0c550fee543f42ba86b18d2621bab568c47d7e10a3b2ca79a3fa.scope - libcontainer container 0e9179381a5d0c550fee543f42ba86b18d2621bab568c47d7e10a3b2ca79a3fa. 
Mar 6 01:47:41.574374 systemd[1]: Started cri-containerd-afa7353ef9106270edd26f9e95a72169a0e84f48f534491dad2f532616599c56.scope - libcontainer container afa7353ef9106270edd26f9e95a72169a0e84f48f534491dad2f532616599c56. Mar 6 01:47:41.578778 systemd[1]: Started cri-containerd-502df44cfc03b29641d4b9dfe83a8c7bc8fb412e90dd4dcb273b1d37391941e6.scope - libcontainer container 502df44cfc03b29641d4b9dfe83a8c7bc8fb412e90dd4dcb273b1d37391941e6. Mar 6 01:47:41.635014 containerd[1454]: time="2026-03-06T01:47:41.634429339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e9179381a5d0c550fee543f42ba86b18d2621bab568c47d7e10a3b2ca79a3fa\"" Mar 6 01:47:41.647275 containerd[1454]: time="2026-03-06T01:47:41.646764683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b99a88ec606ab8625ea34a43af191ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"afa7353ef9106270edd26f9e95a72169a0e84f48f534491dad2f532616599c56\"" Mar 6 01:47:41.648979 kubelet[2161]: E0306 01:47:41.648868 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:41.649633 containerd[1454]: time="2026-03-06T01:47:41.649562473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"502df44cfc03b29641d4b9dfe83a8c7bc8fb412e90dd4dcb273b1d37391941e6\"" Mar 6 01:47:41.650036 kubelet[2161]: E0306 01:47:41.649613 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:41.654240 kubelet[2161]: E0306 01:47:41.654203 2161 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:47:41.657189 containerd[1454]: time="2026-03-06T01:47:41.657099998Z" level=info msg="CreateContainer within sandbox \"afa7353ef9106270edd26f9e95a72169a0e84f48f534491dad2f532616599c56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:47:41.661324 containerd[1454]: time="2026-03-06T01:47:41.661262481Z" level=info msg="CreateContainer within sandbox \"0e9179381a5d0c550fee543f42ba86b18d2621bab568c47d7e10a3b2ca79a3fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:47:41.664738 containerd[1454]: time="2026-03-06T01:47:41.664622990Z" level=info msg="CreateContainer within sandbox \"502df44cfc03b29641d4b9dfe83a8c7bc8fb412e90dd4dcb273b1d37391941e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:47:41.683252 containerd[1454]: time="2026-03-06T01:47:41.683210796Z" level=info msg="CreateContainer within sandbox \"afa7353ef9106270edd26f9e95a72169a0e84f48f534491dad2f532616599c56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9930a4321347388a66b565a34872ae42c6f9db2df1ce7b283da4f033d65e68cd\"" Mar 6 01:47:41.684616 containerd[1454]: time="2026-03-06T01:47:41.684569678Z" level=info msg="StartContainer for \"9930a4321347388a66b565a34872ae42c6f9db2df1ce7b283da4f033d65e68cd\"" Mar 6 01:47:41.690195 containerd[1454]: time="2026-03-06T01:47:41.690098493Z" level=info msg="CreateContainer within sandbox \"0e9179381a5d0c550fee543f42ba86b18d2621bab568c47d7e10a3b2ca79a3fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4fc2c447b2be5397ad4886b1be492a75265377e37ac9415829e1acbb47f55de2\"" Mar 6 01:47:41.691046 containerd[1454]: time="2026-03-06T01:47:41.690987352Z" level=info msg="StartContainer for \"4fc2c447b2be5397ad4886b1be492a75265377e37ac9415829e1acbb47f55de2\"" Mar 6 01:47:41.695854 
containerd[1454]: time="2026-03-06T01:47:41.695825932Z" level=info msg="CreateContainer within sandbox \"502df44cfc03b29641d4b9dfe83a8c7bc8fb412e90dd4dcb273b1d37391941e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e368c1cf3501794005f652536e5d0532ec22ddfb666d3eb445982e2b1a2e284\"" Mar 6 01:47:41.696866 containerd[1454]: time="2026-03-06T01:47:41.696652652Z" level=info msg="StartContainer for \"5e368c1cf3501794005f652536e5d0532ec22ddfb666d3eb445982e2b1a2e284\"" Mar 6 01:47:41.731344 systemd[1]: Started cri-containerd-9930a4321347388a66b565a34872ae42c6f9db2df1ce7b283da4f033d65e68cd.scope - libcontainer container 9930a4321347388a66b565a34872ae42c6f9db2df1ce7b283da4f033d65e68cd. Mar 6 01:47:41.737332 systemd[1]: Started cri-containerd-4fc2c447b2be5397ad4886b1be492a75265377e37ac9415829e1acbb47f55de2.scope - libcontainer container 4fc2c447b2be5397ad4886b1be492a75265377e37ac9415829e1acbb47f55de2. Mar 6 01:47:41.740021 systemd[1]: Started cri-containerd-5e368c1cf3501794005f652536e5d0532ec22ddfb666d3eb445982e2b1a2e284.scope - libcontainer container 5e368c1cf3501794005f652536e5d0532ec22ddfb666d3eb445982e2b1a2e284. 
Mar 6 01:47:41.787228 containerd[1454]: time="2026-03-06T01:47:41.787172419Z" level=info msg="StartContainer for \"9930a4321347388a66b565a34872ae42c6f9db2df1ce7b283da4f033d65e68cd\" returns successfully"
Mar 6 01:47:41.804068 kubelet[2161]: E0306 01:47:41.804031 2161 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.156:6443: connect: connection refused" interval="1.6s"
Mar 6 01:47:41.806706 containerd[1454]: time="2026-03-06T01:47:41.806534790Z" level=info msg="StartContainer for \"5e368c1cf3501794005f652536e5d0532ec22ddfb666d3eb445982e2b1a2e284\" returns successfully"
Mar 6 01:47:41.806706 containerd[1454]: time="2026-03-06T01:47:41.806612715Z" level=info msg="StartContainer for \"4fc2c447b2be5397ad4886b1be492a75265377e37ac9415829e1acbb47f55de2\" returns successfully"
Mar 6 01:47:41.993486 kubelet[2161]: I0306 01:47:41.993384 2161 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 6 01:47:42.448235 kubelet[2161]: E0306 01:47:42.447967 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:47:42.448506 kubelet[2161]: E0306 01:47:42.448281 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:42.449841 kubelet[2161]: E0306 01:47:42.449755 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:47:42.450258 kubelet[2161]: E0306 01:47:42.449999 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:42.453705 kubelet[2161]: E0306 01:47:42.453355 2161 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 6 01:47:42.453705 kubelet[2161]: E0306 01:47:42.453524 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:43.027221 kubelet[2161]: I0306 01:47:43.027085 2161 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 6 01:47:43.098229 kubelet[2161]: I0306 01:47:43.098192 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:43.105850 kubelet[2161]: E0306 01:47:43.105737 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:43.105850 kubelet[2161]: I0306 01:47:43.105830 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:43.108002 kubelet[2161]: E0306 01:47:43.107937 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:43.108002 kubelet[2161]: I0306 01:47:43.107981 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:43.110420 kubelet[2161]: E0306 01:47:43.110367 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:43.367109 kubelet[2161]: I0306 01:47:43.366495 2161 apiserver.go:52] "Watching apiserver"
Mar 6 01:47:43.399277 kubelet[2161]: I0306 01:47:43.399204 2161 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 6 01:47:43.454055 kubelet[2161]: I0306 01:47:43.453976 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:43.455606 kubelet[2161]: I0306 01:47:43.454617 2161 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:43.455812 kubelet[2161]: E0306 01:47:43.455756 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:43.456025 kubelet[2161]: E0306 01:47:43.455951 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:43.456594 kubelet[2161]: E0306 01:47:43.456557 2161 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:43.456862 kubelet[2161]: E0306 01:47:43.456741 2161 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:45.121716 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-9.scope)...
Mar 6 01:47:45.121754 systemd[1]: Reloading...
Mar 6 01:47:45.378373 zram_generator::config[2492]: No configuration found.
Mar 6 01:47:45.502081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 6 01:47:45.614068 systemd[1]: Reloading finished in 491 ms.
Mar 6 01:47:45.674370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:47:45.674730 kubelet[2161]: I0306 01:47:45.674355 2161 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 01:47:45.686645 systemd[1]: kubelet.service: Deactivated successfully.
Mar 6 01:47:45.686988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:47:45.700503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 01:47:45.868777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 01:47:45.881627 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 01:47:45.956913 kubelet[2537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 01:47:45.968178 kubelet[2537]: I0306 01:47:45.968085 2537 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 6 01:47:45.968555 kubelet[2537]: I0306 01:47:45.968482 2537 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 01:47:45.968555 kubelet[2537]: I0306 01:47:45.968534 2537 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 01:47:45.968555 kubelet[2537]: I0306 01:47:45.968542 2537 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 01:47:45.968861 kubelet[2537]: I0306 01:47:45.968809 2537 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 6 01:47:45.970283 kubelet[2537]: I0306 01:47:45.970217 2537 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 6 01:47:45.975951 kubelet[2537]: I0306 01:47:45.975760 2537 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 01:47:45.979641 kubelet[2537]: E0306 01:47:45.979555 2537 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 6 01:47:45.979641 kubelet[2537]: I0306 01:47:45.979614 2537 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 6 01:47:45.986668 kubelet[2537]: I0306 01:47:45.986596 2537 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 01:47:45.986990 kubelet[2537]: I0306 01:47:45.986901 2537 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 01:47:45.987086 kubelet[2537]: I0306 01:47:45.986941 2537 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 01:47:45.987086 kubelet[2537]: I0306 01:47:45.987085 2537 topology_manager.go:143] "Creating topology manager with none policy"
Mar 6 01:47:45.987388 kubelet[2537]: I0306 01:47:45.987094 2537 container_manager_linux.go:308] "Creating device plugin manager"
Mar 6 01:47:45.987388 kubelet[2537]: I0306 01:47:45.987115 2537 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 01:47:45.987388 kubelet[2537]: I0306 01:47:45.987342 2537 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 6 01:47:45.987672 kubelet[2537]: I0306 01:47:45.987606 2537 kubelet.go:482] "Attempting to sync node with API server"
Mar 6 01:47:45.987672 kubelet[2537]: I0306 01:47:45.987633 2537 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 01:47:45.987672 kubelet[2537]: I0306 01:47:45.987649 2537 kubelet.go:394] "Adding apiserver pod source"
Mar 6 01:47:45.987672 kubelet[2537]: I0306 01:47:45.987657 2537 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 01:47:45.989912 kubelet[2537]: I0306 01:47:45.989749 2537 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 6 01:47:45.992894 kubelet[2537]: I0306 01:47:45.992805 2537 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 01:47:45.993094 kubelet[2537]: I0306 01:47:45.992992 2537 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 01:47:46.001613 kubelet[2537]: I0306 01:47:46.001363 2537 server.go:1257] "Started kubelet"
Mar 6 01:47:46.004983 kubelet[2537]: I0306 01:47:46.004907 2537 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 01:47:46.005050 kubelet[2537]: I0306 01:47:46.004987 2537 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 01:47:46.005408 kubelet[2537]: I0306 01:47:46.005379 2537 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 01:47:46.005852 kubelet[2537]: I0306 01:47:46.005838 2537 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 01:47:46.008774 kubelet[2537]: I0306 01:47:46.008729 2537 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 01:47:46.009583 kubelet[2537]: I0306 01:47:46.009531 2537 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 6 01:47:46.010831 kubelet[2537]: I0306 01:47:46.010765 2537 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 01:47:46.012042 kubelet[2537]: I0306 01:47:46.011852 2537 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 6 01:47:46.012042 kubelet[2537]: I0306 01:47:46.011953 2537 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 01:47:46.012113 kubelet[2537]: I0306 01:47:46.012066 2537 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 01:47:46.014372 kubelet[2537]: E0306 01:47:46.014188 2537 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 01:47:46.016438 kubelet[2537]: I0306 01:47:46.015698 2537 factory.go:223] Registration of the systemd container factory successfully
Mar 6 01:47:46.016438 kubelet[2537]: I0306 01:47:46.015819 2537 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 01:47:46.019429 kubelet[2537]: I0306 01:47:46.019372 2537 factory.go:223] Registration of the containerd container factory successfully
Mar 6 01:47:46.033237 kubelet[2537]: I0306 01:47:46.033169 2537 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 01:47:46.035327 kubelet[2537]: I0306 01:47:46.035284 2537 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 01:47:46.035327 kubelet[2537]: I0306 01:47:46.035322 2537 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 6 01:47:46.035405 kubelet[2537]: I0306 01:47:46.035347 2537 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 6 01:47:46.035465 kubelet[2537]: E0306 01:47:46.035420 2537 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 01:47:46.079777 kubelet[2537]: I0306 01:47:46.079654 2537 cpu_manager.go:225] "Starting" policy="none"
Mar 6 01:47:46.080352 kubelet[2537]: I0306 01:47:46.080031 2537 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 6 01:47:46.080352 kubelet[2537]: I0306 01:47:46.080054 2537 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 6 01:47:46.080352 kubelet[2537]: I0306 01:47:46.080317 2537 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 6 01:47:46.080352 kubelet[2537]: I0306 01:47:46.080332 2537 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 6 01:47:46.080352 kubelet[2537]: I0306 01:47:46.080349 2537 policy_none.go:50] "Start"
Mar 6 01:47:46.080696 kubelet[2537]: I0306 01:47:46.080358 2537 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 01:47:46.080696 kubelet[2537]: I0306 01:47:46.080370 2537 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 01:47:46.080696 kubelet[2537]: I0306 01:47:46.080452 2537 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 6 01:47:46.080696 kubelet[2537]: I0306 01:47:46.080460 2537 policy_none.go:44] "Start"
Mar 6 01:47:46.086382 kubelet[2537]: E0306 01:47:46.086267 2537 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 01:47:46.086671 kubelet[2537]: I0306 01:47:46.086604 2537 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 6 01:47:46.086704 kubelet[2537]: I0306 01:47:46.086652 2537 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 01:47:46.086990 kubelet[2537]: I0306 01:47:46.086935 2537 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 6 01:47:46.090824 kubelet[2537]: E0306 01:47:46.089080 2537 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 01:47:46.137250 kubelet[2537]: I0306 01:47:46.136834 2537 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:46.137250 kubelet[2537]: I0306 01:47:46.136854 2537 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:46.137250 kubelet[2537]: I0306 01:47:46.136854 2537 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.197940 kubelet[2537]: I0306 01:47:46.197268 2537 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 6 01:47:46.209568 kubelet[2537]: I0306 01:47:46.208504 2537 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 6 01:47:46.209568 kubelet[2537]: I0306 01:47:46.208576 2537 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 6 01:47:46.314262 kubelet[2537]: I0306 01:47:46.314057 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.314262 kubelet[2537]: I0306 01:47:46.314100 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.314262 kubelet[2537]: I0306 01:47:46.314165 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:46.314262 kubelet[2537]: I0306 01:47:46.314181 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.314262 kubelet[2537]: I0306 01:47:46.314199 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.314616 kubelet[2537]: I0306 01:47:46.314215 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 6 01:47:46.314616 kubelet[2537]: I0306 01:47:46.314228 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:46.314616 kubelet[2537]: I0306 01:47:46.314240 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:46.314616 kubelet[2537]: I0306 01:47:46.314254 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99a88ec606ab8625ea34a43af191ec5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b99a88ec606ab8625ea34a43af191ec5\") " pod="kube-system/kube-apiserver-localhost"
Mar 6 01:47:46.633796 kubelet[2537]: E0306 01:47:46.633036 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:46.638407 kubelet[2537]: E0306 01:47:46.638359 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:46.654543 kubelet[2537]: E0306 01:47:46.652858 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:46.989612 kubelet[2537]: I0306 01:47:46.989338 2537 apiserver.go:52] "Watching apiserver"
Mar 6 01:47:47.012295 kubelet[2537]: I0306 01:47:47.012224 2537 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 6 01:47:47.056782 kubelet[2537]: E0306 01:47:47.056731 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:47.057062 kubelet[2537]: I0306 01:47:47.057027 2537 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:47.057667 kubelet[2537]: E0306 01:47:47.057618 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:47.065402 kubelet[2537]: E0306 01:47:47.065291 2537 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 6 01:47:47.065541 kubelet[2537]: E0306 01:47:47.065458 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:47.325847 kubelet[2537]: I0306 01:47:47.325229 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.325218893 podStartE2EDuration="1.325218893s" podCreationTimestamp="2026-03-06 01:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:47:47.323489597 +0000 UTC m=+1.433280713" watchObservedRunningTime="2026-03-06 01:47:47.325218893 +0000 UTC m=+1.435010009"
Mar 6 01:47:47.344310 kubelet[2537]: I0306 01:47:47.344201 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3441887160000001 podStartE2EDuration="1.344188716s" podCreationTimestamp="2026-03-06 01:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:47:47.334989175 +0000 UTC m=+1.444780291" watchObservedRunningTime="2026-03-06 01:47:47.344188716 +0000 UTC m=+1.453979832"
Mar 6 01:47:47.344310 kubelet[2537]: I0306 01:47:47.344311 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.344306555 podStartE2EDuration="1.344306555s" podCreationTimestamp="2026-03-06 01:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:47:47.343206258 +0000 UTC m=+1.452997384" watchObservedRunningTime="2026-03-06 01:47:47.344306555 +0000 UTC m=+1.454097671"
Mar 6 01:47:48.060771 kubelet[2537]: E0306 01:47:48.059632 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:48.060771 kubelet[2537]: E0306 01:47:48.059674 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:48.060771 kubelet[2537]: E0306 01:47:48.059764 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:49.069529 kubelet[2537]: E0306 01:47:49.068922 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:49.076305 kubelet[2537]: E0306 01:47:49.070013 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:50.025797 kubelet[2537]: I0306 01:47:50.025506 2537 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 6 01:47:50.028976 containerd[1454]: time="2026-03-06T01:47:50.027317042Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 6 01:47:50.030602 kubelet[2537]: I0306 01:47:50.030360 2537 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 6 01:47:50.069938 kubelet[2537]: E0306 01:47:50.069823 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:51.089292 kubelet[2537]: I0306 01:47:51.088727 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e1ce1d0-edbf-4672-8528-94feaef7c9f2-kube-proxy\") pod \"kube-proxy-smpzd\" (UID: \"4e1ce1d0-edbf-4672-8528-94feaef7c9f2\") " pod="kube-system/kube-proxy-smpzd"
Mar 6 01:47:51.089292 kubelet[2537]: I0306 01:47:51.088952 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e1ce1d0-edbf-4672-8528-94feaef7c9f2-xtables-lock\") pod \"kube-proxy-smpzd\" (UID: \"4e1ce1d0-edbf-4672-8528-94feaef7c9f2\") " pod="kube-system/kube-proxy-smpzd"
Mar 6 01:47:51.089292 kubelet[2537]: I0306 01:47:51.088972 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e1ce1d0-edbf-4672-8528-94feaef7c9f2-lib-modules\") pod \"kube-proxy-smpzd\" (UID: \"4e1ce1d0-edbf-4672-8528-94feaef7c9f2\") " pod="kube-system/kube-proxy-smpzd"
Mar 6 01:47:51.089292 kubelet[2537]: I0306 01:47:51.088990 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kksl\" (UniqueName: \"kubernetes.io/projected/4e1ce1d0-edbf-4672-8528-94feaef7c9f2-kube-api-access-6kksl\") pod \"kube-proxy-smpzd\" (UID: \"4e1ce1d0-edbf-4672-8528-94feaef7c9f2\") " pod="kube-system/kube-proxy-smpzd"
Mar 6 01:47:51.124292 systemd[1]: Created slice kubepods-besteffort-pod4e1ce1d0_edbf_4672_8528_94feaef7c9f2.slice - libcontainer container kubepods-besteffort-pod4e1ce1d0_edbf_4672_8528_94feaef7c9f2.slice.
Mar 6 01:47:51.264661 systemd[1]: Created slice kubepods-besteffort-podcfefe211_9cf5_4a3e_ab46_bd393a393eaf.slice - libcontainer container kubepods-besteffort-podcfefe211_9cf5_4a3e_ab46_bd393a393eaf.slice.
Mar 6 01:47:51.391480 kubelet[2537]: I0306 01:47:51.391282 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfefe211-9cf5-4a3e-ab46-bd393a393eaf-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-6srww\" (UID: \"cfefe211-9cf5-4a3e-ab46-bd393a393eaf\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6srww"
Mar 6 01:47:51.391480 kubelet[2537]: I0306 01:47:51.391333 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5k42\" (UniqueName: \"kubernetes.io/projected/cfefe211-9cf5-4a3e-ab46-bd393a393eaf-kube-api-access-p5k42\") pod \"tigera-operator-6cf4cccc57-6srww\" (UID: \"cfefe211-9cf5-4a3e-ab46-bd393a393eaf\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6srww"
Mar 6 01:47:51.445030 kubelet[2537]: E0306 01:47:51.444993 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:51.446938 containerd[1454]: time="2026-03-06T01:47:51.446175561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smpzd,Uid:4e1ce1d0-edbf-4672-8528-94feaef7c9f2,Namespace:kube-system,Attempt:0,}"
Mar 6 01:47:51.515952 containerd[1454]: time="2026-03-06T01:47:51.515702803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:47:51.518059 containerd[1454]: time="2026-03-06T01:47:51.517660908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:47:51.518059 containerd[1454]: time="2026-03-06T01:47:51.517760854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:47:51.518059 containerd[1454]: time="2026-03-06T01:47:51.517923717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:47:51.567382 systemd[1]: Started cri-containerd-d663e9ab3cd3f599caa2eac5ecc193d61ac1fa3a1c179ad8aa226dad6f495e30.scope - libcontainer container d663e9ab3cd3f599caa2eac5ecc193d61ac1fa3a1c179ad8aa226dad6f495e30.
Mar 6 01:47:51.575201 containerd[1454]: time="2026-03-06T01:47:51.574956291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6srww,Uid:cfefe211-9cf5-4a3e-ab46-bd393a393eaf,Namespace:tigera-operator,Attempt:0,}"
Mar 6 01:47:51.725589 containerd[1454]: time="2026-03-06T01:47:51.724978984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smpzd,Uid:4e1ce1d0-edbf-4672-8528-94feaef7c9f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d663e9ab3cd3f599caa2eac5ecc193d61ac1fa3a1c179ad8aa226dad6f495e30\""
Mar 6 01:47:51.729184 kubelet[2537]: E0306 01:47:51.728227 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:51.729385 containerd[1454]: time="2026-03-06T01:47:51.727502511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:47:51.729385 containerd[1454]: time="2026-03-06T01:47:51.727608409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:47:51.729385 containerd[1454]: time="2026-03-06T01:47:51.727623918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:47:51.729385 containerd[1454]: time="2026-03-06T01:47:51.728409071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:47:51.735538 containerd[1454]: time="2026-03-06T01:47:51.735413121Z" level=info msg="CreateContainer within sandbox \"d663e9ab3cd3f599caa2eac5ecc193d61ac1fa3a1c179ad8aa226dad6f495e30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 6 01:47:51.755992 containerd[1454]: time="2026-03-06T01:47:51.755817324Z" level=info msg="CreateContainer within sandbox \"d663e9ab3cd3f599caa2eac5ecc193d61ac1fa3a1c179ad8aa226dad6f495e30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96e9c6a69043a1077ff68d5c54d93e6fa106830f67b035c5f4ab01421b2595e0\""
Mar 6 01:47:51.756737 containerd[1454]: time="2026-03-06T01:47:51.756712990Z" level=info msg="StartContainer for \"96e9c6a69043a1077ff68d5c54d93e6fa106830f67b035c5f4ab01421b2595e0\""
Mar 6 01:47:51.782297 systemd[1]: Started cri-containerd-bd37a5939686a5b6843dd3a39dc60fb0b95398b8b25b7575ec74b708c24bb8d3.scope - libcontainer container bd37a5939686a5b6843dd3a39dc60fb0b95398b8b25b7575ec74b708c24bb8d3.
Mar 6 01:47:51.830487 systemd[1]: Started cri-containerd-96e9c6a69043a1077ff68d5c54d93e6fa106830f67b035c5f4ab01421b2595e0.scope - libcontainer container 96e9c6a69043a1077ff68d5c54d93e6fa106830f67b035c5f4ab01421b2595e0.
Mar 6 01:47:51.855716 containerd[1454]: time="2026-03-06T01:47:51.855660803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6srww,Uid:cfefe211-9cf5-4a3e-ab46-bd393a393eaf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bd37a5939686a5b6843dd3a39dc60fb0b95398b8b25b7575ec74b708c24bb8d3\""
Mar 6 01:47:51.860725 containerd[1454]: time="2026-03-06T01:47:51.860624764Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 6 01:47:51.900701 containerd[1454]: time="2026-03-06T01:47:51.900536848Z" level=info msg="StartContainer for \"96e9c6a69043a1077ff68d5c54d93e6fa106830f67b035c5f4ab01421b2595e0\" returns successfully"
Mar 6 01:47:52.079284 kubelet[2537]: E0306 01:47:52.079115 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:52.537315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169069763.mount: Deactivated successfully.
Mar 6 01:47:52.554114 kubelet[2537]: E0306 01:47:52.554060 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:52.569809 kubelet[2537]: I0306 01:47:52.569578 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-smpzd" podStartSLOduration=1.569558479 podStartE2EDuration="1.569558479s" podCreationTimestamp="2026-03-06 01:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:47:52.089871271 +0000 UTC m=+6.199662387" watchObservedRunningTime="2026-03-06 01:47:52.569558479 +0000 UTC m=+6.679349595"
Mar 6 01:47:54.211324 containerd[1454]: time="2026-03-06T01:47:54.211259846Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:47:54.212476 containerd[1454]: time="2026-03-06T01:47:54.212415558Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 6 01:47:54.213901 containerd[1454]: time="2026-03-06T01:47:54.213771074Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:47:54.217760 containerd[1454]: time="2026-03-06T01:47:54.216673940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:47:54.217963 containerd[1454]: time="2026-03-06T01:47:54.217806029Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.356908267s"
Mar 6 01:47:54.217963 containerd[1454]: time="2026-03-06T01:47:54.217887079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 6 01:47:54.225791 containerd[1454]: time="2026-03-06T01:47:54.225730959Z" level=info msg="CreateContainer within sandbox \"bd37a5939686a5b6843dd3a39dc60fb0b95398b8b25b7575ec74b708c24bb8d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 6 01:47:54.246545 containerd[1454]: time="2026-03-06T01:47:54.246457848Z" level=info msg="CreateContainer within sandbox \"bd37a5939686a5b6843dd3a39dc60fb0b95398b8b25b7575ec74b708c24bb8d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5484fc96f69bc371f809eaa978c2458bd1bab16129dd2612317b420bf083ddd\""
Mar 6 01:47:54.247318 containerd[1454]: time="2026-03-06T01:47:54.247213382Z" level=info msg="StartContainer for \"f5484fc96f69bc371f809eaa978c2458bd1bab16129dd2612317b420bf083ddd\""
Mar 6 01:47:54.320293 systemd[1]: Started cri-containerd-f5484fc96f69bc371f809eaa978c2458bd1bab16129dd2612317b420bf083ddd.scope - libcontainer container f5484fc96f69bc371f809eaa978c2458bd1bab16129dd2612317b420bf083ddd.
Mar 6 01:47:54.396111 containerd[1454]: time="2026-03-06T01:47:54.395988867Z" level=info msg="StartContainer for \"f5484fc96f69bc371f809eaa978c2458bd1bab16129dd2612317b420bf083ddd\" returns successfully"
Mar 6 01:47:55.104574 kubelet[2537]: I0306 01:47:55.104446 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-6srww" podStartSLOduration=1.74486916 podStartE2EDuration="4.104433547s" podCreationTimestamp="2026-03-06 01:47:51 +0000 UTC" firstStartedPulling="2026-03-06 01:47:51.859780462 +0000 UTC m=+5.969571577" lastFinishedPulling="2026-03-06 01:47:54.219344847 +0000 UTC m=+8.329135964" observedRunningTime="2026-03-06 01:47:55.104239095 +0000 UTC m=+9.214030210" watchObservedRunningTime="2026-03-06 01:47:55.104433547 +0000 UTC m=+9.214224663"
Mar 6 01:47:56.930818 update_engine[1444]: I20260306 01:47:56.930521 1444 update_attempter.cc:509] Updating boot flags...
Mar 6 01:47:57.060228 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2915)
Mar 6 01:47:57.167274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2913)
Mar 6 01:47:57.237243 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2913)
Mar 6 01:47:57.459048 kubelet[2537]: E0306 01:47:57.458954 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:47:59.186428 kubelet[2537]: E0306 01:47:59.186328 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:00.327040 sudo[1648]: pam_unix(sudo:session): session closed for user root
Mar 6 01:48:00.335886 sshd[1645]: pam_unix(sshd:session): session closed for user core
Mar 6 01:48:00.349639 systemd[1]: sshd@8-10.0.0.156:22-10.0.0.1:37710.service: Deactivated successfully.
Mar 6 01:48:00.352929 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 01:48:00.353564 systemd[1]: session-9.scope: Consumed 5.651s CPU time, 159.9M memory peak, 0B memory swap peak.
Mar 6 01:48:00.358075 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit.
Mar 6 01:48:00.364191 systemd-logind[1442]: Removed session 9.
Mar 6 01:48:02.175267 systemd[1]: Created slice kubepods-besteffort-podf59993d7_61f9_4935_9bd1_1a0e830cc72e.slice - libcontainer container kubepods-besteffort-podf59993d7_61f9_4935_9bd1_1a0e830cc72e.slice.
Mar 6 01:48:02.196184 kubelet[2537]: I0306 01:48:02.196040 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f59993d7-61f9-4935-9bd1-1a0e830cc72e-tigera-ca-bundle\") pod \"calico-typha-6d899ddfdf-mjsqx\" (UID: \"f59993d7-61f9-4935-9bd1-1a0e830cc72e\") " pod="calico-system/calico-typha-6d899ddfdf-mjsqx"
Mar 6 01:48:02.196788 kubelet[2537]: I0306 01:48:02.196240 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f59993d7-61f9-4935-9bd1-1a0e830cc72e-typha-certs\") pod \"calico-typha-6d899ddfdf-mjsqx\" (UID: \"f59993d7-61f9-4935-9bd1-1a0e830cc72e\") " pod="calico-system/calico-typha-6d899ddfdf-mjsqx"
Mar 6 01:48:02.196788 kubelet[2537]: I0306 01:48:02.196266 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmlr\" (UniqueName: \"kubernetes.io/projected/f59993d7-61f9-4935-9bd1-1a0e830cc72e-kube-api-access-jqmlr\") pod \"calico-typha-6d899ddfdf-mjsqx\" (UID: \"f59993d7-61f9-4935-9bd1-1a0e830cc72e\") " pod="calico-system/calico-typha-6d899ddfdf-mjsqx"
Mar 6 01:48:02.205861 systemd[1]: Created slice kubepods-besteffort-pod519720db_520f_4c44_a4be_e0cd134ebdf6.slice - libcontainer container kubepods-besteffort-pod519720db_520f_4c44_a4be_e0cd134ebdf6.slice.
Mar 6 01:48:02.305233 kubelet[2537]: E0306 01:48:02.304940 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d"
Mar 6 01:48:02.397785 kubelet[2537]: I0306 01:48:02.397663 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-cni-bin-dir\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.397785 kubelet[2537]: I0306 01:48:02.397723 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-bpffs\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.397785 kubelet[2537]: I0306 01:48:02.397744 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-cni-net-dir\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.397785 kubelet[2537]: I0306 01:48:02.397764 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/519720db-520f-4c44-a4be-e0cd134ebdf6-node-certs\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.397785 kubelet[2537]: I0306 01:48:02.397784 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-var-lib-calico\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398188 kubelet[2537]: I0306 01:48:02.397801 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtmq\" (UniqueName: \"kubernetes.io/projected/519720db-520f-4c44-a4be-e0cd134ebdf6-kube-api-access-rqtmq\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398188 kubelet[2537]: I0306 01:48:02.397857 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-sys-fs\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398188 kubelet[2537]: I0306 01:48:02.397905 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-var-run-calico\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398188 kubelet[2537]: I0306 01:48:02.397973 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-flexvol-driver-host\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398188 kubelet[2537]: I0306 01:48:02.397997 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-nodeproc\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398455 kubelet[2537]: I0306 01:48:02.398057 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-cni-log-dir\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398455 kubelet[2537]: I0306 01:48:02.398089 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/519720db-520f-4c44-a4be-e0cd134ebdf6-tigera-ca-bundle\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398455 kubelet[2537]: I0306 01:48:02.398180 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-xtables-lock\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398455 kubelet[2537]: I0306 01:48:02.398202 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-lib-modules\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.398455 kubelet[2537]: I0306 01:48:02.398216 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/519720db-520f-4c44-a4be-e0cd134ebdf6-policysync\") pod \"calico-node-26pps\" (UID: \"519720db-520f-4c44-a4be-e0cd134ebdf6\") " pod="calico-system/calico-node-26pps"
Mar 6 01:48:02.484511 kubelet[2537]: E0306 01:48:02.484410 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:02.485733 containerd[1454]: time="2026-03-06T01:48:02.485324267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d899ddfdf-mjsqx,Uid:f59993d7-61f9-4935-9bd1-1a0e830cc72e,Namespace:calico-system,Attempt:0,}"
Mar 6 01:48:02.507760 kubelet[2537]: I0306 01:48:02.507718 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d-kubelet-dir\") pod \"csi-node-driver-wrl2h\" (UID: \"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d\") " pod="calico-system/csi-node-driver-wrl2h"
Mar 6 01:48:02.507890 kubelet[2537]: I0306 01:48:02.507772 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d-registration-dir\") pod \"csi-node-driver-wrl2h\" (UID: \"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d\") " pod="calico-system/csi-node-driver-wrl2h"
Mar 6 01:48:02.507890 kubelet[2537]: I0306 01:48:02.507845 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d-varrun\") pod \"csi-node-driver-wrl2h\" (UID: \"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d\") " pod="calico-system/csi-node-driver-wrl2h"
Mar 6 01:48:02.507890 kubelet[2537]: I0306 01:48:02.507863 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlqb\" (UniqueName: \"kubernetes.io/projected/32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d-kube-api-access-7zlqb\") pod \"csi-node-driver-wrl2h\" (UID: \"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d\") " pod="calico-system/csi-node-driver-wrl2h"
Mar 6 01:48:02.507971 kubelet[2537]: I0306 01:48:02.507959 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d-socket-dir\") pod \"csi-node-driver-wrl2h\" (UID: \"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d\") " pod="calico-system/csi-node-driver-wrl2h"
Mar 6 01:48:02.513271 kubelet[2537]: E0306 01:48:02.512938 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.513271 kubelet[2537]: W0306 01:48:02.512955 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.513271 kubelet[2537]: E0306 01:48:02.512992 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.513587 kubelet[2537]: E0306 01:48:02.513525 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.513587 kubelet[2537]: W0306 01:48:02.513558 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.513704 kubelet[2537]: E0306 01:48:02.513647 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.528894 containerd[1454]: time="2026-03-06T01:48:02.528690409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:48:02.528894 containerd[1454]: time="2026-03-06T01:48:02.528755731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:48:02.528894 containerd[1454]: time="2026-03-06T01:48:02.528766341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:02.530507 containerd[1454]: time="2026-03-06T01:48:02.528877128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:02.530542 kubelet[2537]: E0306 01:48:02.528884 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.530542 kubelet[2537]: W0306 01:48:02.528902 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.530542 kubelet[2537]: E0306 01:48:02.528924 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.532251 kubelet[2537]: E0306 01:48:02.532211 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.532306 kubelet[2537]: W0306 01:48:02.532252 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.532306 kubelet[2537]: E0306 01:48:02.532269 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.533417 kubelet[2537]: E0306 01:48:02.533359 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.533417 kubelet[2537]: W0306 01:48:02.533397 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.533417 kubelet[2537]: E0306 01:48:02.533411 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.534839 kubelet[2537]: E0306 01:48:02.534112 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.534839 kubelet[2537]: W0306 01:48:02.534183 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.534839 kubelet[2537]: E0306 01:48:02.534195 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.535358 kubelet[2537]: E0306 01:48:02.535296 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.535358 kubelet[2537]: W0306 01:48:02.535323 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.535358 kubelet[2537]: E0306 01:48:02.535334 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.537574 kubelet[2537]: E0306 01:48:02.537517 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.537574 kubelet[2537]: W0306 01:48:02.537541 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.537574 kubelet[2537]: E0306 01:48:02.537552 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.537907 kubelet[2537]: E0306 01:48:02.537889 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.537907 kubelet[2537]: W0306 01:48:02.537902 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.538175 kubelet[2537]: E0306 01:48:02.537912 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.538465 kubelet[2537]: E0306 01:48:02.538337 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.538465 kubelet[2537]: W0306 01:48:02.538369 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.538465 kubelet[2537]: E0306 01:48:02.538378 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.538705 kubelet[2537]: E0306 01:48:02.538678 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.538705 kubelet[2537]: W0306 01:48:02.538691 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.538705 kubelet[2537]: E0306 01:48:02.538702 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.539077 kubelet[2537]: E0306 01:48:02.539046 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.539077 kubelet[2537]: W0306 01:48:02.539059 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.539077 kubelet[2537]: E0306 01:48:02.539068 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.539441 kubelet[2537]: E0306 01:48:02.539412 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.539441 kubelet[2537]: W0306 01:48:02.539439 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.539544 kubelet[2537]: E0306 01:48:02.539449 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.539842 kubelet[2537]: E0306 01:48:02.539756 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.539842 kubelet[2537]: W0306 01:48:02.539784 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.539842 kubelet[2537]: E0306 01:48:02.539794 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.540308 kubelet[2537]: E0306 01:48:02.540218 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.540308 kubelet[2537]: W0306 01:48:02.540246 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.540308 kubelet[2537]: E0306 01:48:02.540256 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.540955 kubelet[2537]: E0306 01:48:02.540884 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 6 01:48:02.540955 kubelet[2537]: W0306 01:48:02.540909 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 6 01:48:02.540955 kubelet[2537]: E0306 01:48:02.540919 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 6 01:48:02.562005 kubelet[2537]: E0306 01:48:02.561927 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:02.563482 systemd[1]: Started cri-containerd-c54159f409d0b04065d07e62861331068b892bae5842fab0c17a97753830d583.scope - libcontainer container c54159f409d0b04065d07e62861331068b892bae5842fab0c17a97753830d583.
Mar 6 01:48:02.601994 kubelet[2537]: E0306 01:48:02.601896 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.601994 kubelet[2537]: W0306 01:48:02.601939 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.601994 kubelet[2537]: E0306 01:48:02.601961 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:02.602440 kubelet[2537]: E0306 01:48:02.602386 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.602440 kubelet[2537]: W0306 01:48:02.602418 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.602440 kubelet[2537]: E0306 01:48:02.602431 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:02.602978 kubelet[2537]: E0306 01:48:02.602869 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.602978 kubelet[2537]: W0306 01:48:02.602908 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.602978 kubelet[2537]: E0306 01:48:02.602921 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:02.603542 kubelet[2537]: E0306 01:48:02.603453 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.603542 kubelet[2537]: W0306 01:48:02.603485 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.603542 kubelet[2537]: E0306 01:48:02.603497 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:02.603970 kubelet[2537]: E0306 01:48:02.603866 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.603970 kubelet[2537]: W0306 01:48:02.603898 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.603970 kubelet[2537]: E0306 01:48:02.603908 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:02.604328 kubelet[2537]: E0306 01:48:02.604234 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.604328 kubelet[2537]: W0306 01:48:02.604265 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.604328 kubelet[2537]: E0306 01:48:02.604275 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:02.604690 kubelet[2537]: E0306 01:48:02.604540 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:02.604690 kubelet[2537]: W0306 01:48:02.604549 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:02.604690 kubelet[2537]: E0306 01:48:02.604558 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:02.642667 containerd[1454]: time="2026-03-06T01:48:02.642454991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d899ddfdf-mjsqx,Uid:f59993d7-61f9-4935-9bd1-1a0e830cc72e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c54159f409d0b04065d07e62861331068b892bae5842fab0c17a97753830d583\"" Mar 6 01:48:02.643458 kubelet[2537]: E0306 01:48:02.643351 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:02.645490 containerd[1454]: time="2026-03-06T01:48:02.645429177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:48:02.813633 containerd[1454]: time="2026-03-06T01:48:02.813343955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26pps,Uid:519720db-520f-4c44-a4be-e0cd134ebdf6,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:02.855302 containerd[1454]: time="2026-03-06T01:48:02.854901848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:02.855302 containerd[1454]: time="2026-03-06T01:48:02.855025308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:02.855302 containerd[1454]: time="2026-03-06T01:48:02.855036129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:02.855533 containerd[1454]: time="2026-03-06T01:48:02.855385309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:02.883333 systemd[1]: Started cri-containerd-57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd.scope - libcontainer container 57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd. Mar 6 01:48:02.920702 containerd[1454]: time="2026-03-06T01:48:02.920659352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-26pps,Uid:519720db-520f-4c44-a4be-e0cd134ebdf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\"" Mar 6 01:48:04.035980 kubelet[2537]: E0306 01:48:04.035608 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:04.450191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318664304.mount: Deactivated successfully. 
Mar 6 01:48:06.039577 kubelet[2537]: E0306 01:48:06.039498 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:06.293342 containerd[1454]: time="2026-03-06T01:48:06.293113549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:06.294114 containerd[1454]: time="2026-03-06T01:48:06.294060115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 6 01:48:06.295703 containerd[1454]: time="2026-03-06T01:48:06.295649238Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:06.298395 containerd[1454]: time="2026-03-06T01:48:06.298335238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:06.299337 containerd[1454]: time="2026-03-06T01:48:06.299282524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.653787564s" Mar 6 01:48:06.299382 containerd[1454]: time="2026-03-06T01:48:06.299337266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:48:06.300390 containerd[1454]: time="2026-03-06T01:48:06.300350336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:48:06.318261 containerd[1454]: time="2026-03-06T01:48:06.318208244Z" level=info msg="CreateContainer within sandbox \"c54159f409d0b04065d07e62861331068b892bae5842fab0c17a97753830d583\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:48:06.333752 containerd[1454]: time="2026-03-06T01:48:06.333689380Z" level=info msg="CreateContainer within sandbox \"c54159f409d0b04065d07e62861331068b892bae5842fab0c17a97753830d583\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"afa34e94de0b6b5d68cfb58d284f6eb50e6503b56ecfcd0a507432bf3d68f1b2\"" Mar 6 01:48:06.334457 containerd[1454]: time="2026-03-06T01:48:06.334338071Z" level=info msg="StartContainer for \"afa34e94de0b6b5d68cfb58d284f6eb50e6503b56ecfcd0a507432bf3d68f1b2\"" Mar 6 01:48:06.381332 systemd[1]: Started cri-containerd-afa34e94de0b6b5d68cfb58d284f6eb50e6503b56ecfcd0a507432bf3d68f1b2.scope - libcontainer container afa34e94de0b6b5d68cfb58d284f6eb50e6503b56ecfcd0a507432bf3d68f1b2. 
Mar 6 01:48:06.438932 containerd[1454]: time="2026-03-06T01:48:06.438859819Z" level=info msg="StartContainer for \"afa34e94de0b6b5d68cfb58d284f6eb50e6503b56ecfcd0a507432bf3d68f1b2\" returns successfully" Mar 6 01:48:07.127797 kubelet[2537]: E0306 01:48:07.127533 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:07.135938 kubelet[2537]: E0306 01:48:07.135905 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.135938 kubelet[2537]: W0306 01:48:07.135935 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.136090 kubelet[2537]: E0306 01:48:07.135953 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.136419 kubelet[2537]: E0306 01:48:07.136392 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.136419 kubelet[2537]: W0306 01:48:07.136405 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.136419 kubelet[2537]: E0306 01:48:07.136419 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.140331 kubelet[2537]: E0306 01:48:07.139191 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.140331 kubelet[2537]: W0306 01:48:07.139203 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.140331 kubelet[2537]: E0306 01:48:07.139212 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.140331 kubelet[2537]: I0306 01:48:07.139338 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6d899ddfdf-mjsqx" podStartSLOduration=1.483842397 podStartE2EDuration="5.139330526s" podCreationTimestamp="2026-03-06 01:48:02 +0000 UTC" firstStartedPulling="2026-03-06 01:48:02.64471254 +0000 UTC m=+16.754503666" lastFinishedPulling="2026-03-06 01:48:06.300200679 +0000 UTC m=+20.409991795" observedRunningTime="2026-03-06 01:48:07.138527058 +0000 UTC m=+21.248318174" watchObservedRunningTime="2026-03-06 01:48:07.139330526 +0000 UTC m=+21.249121642" Mar 6 01:48:07.140331 kubelet[2537]: E0306 01:48:07.139539 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.140331 kubelet[2537]: W0306 01:48:07.139551 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.140331 kubelet[2537]: E0306 01:48:07.139562 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.140331 kubelet[2537]: E0306 01:48:07.140056 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.140598 kubelet[2537]: W0306 01:48:07.140065 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.140598 kubelet[2537]: E0306 01:48:07.140076 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.140598 kubelet[2537]: E0306 01:48:07.140344 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.140598 kubelet[2537]: W0306 01:48:07.140353 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.140598 kubelet[2537]: E0306 01:48:07.140363 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.140776 kubelet[2537]: E0306 01:48:07.140629 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.140776 kubelet[2537]: W0306 01:48:07.140638 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.140776 kubelet[2537]: E0306 01:48:07.140699 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.141273 kubelet[2537]: E0306 01:48:07.140978 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.141273 kubelet[2537]: W0306 01:48:07.140992 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.141273 kubelet[2537]: E0306 01:48:07.141001 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.141480 kubelet[2537]: E0306 01:48:07.141345 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.141480 kubelet[2537]: W0306 01:48:07.141354 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.141480 kubelet[2537]: E0306 01:48:07.141363 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.141783 kubelet[2537]: E0306 01:48:07.141754 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.141783 kubelet[2537]: W0306 01:48:07.141768 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.141783 kubelet[2537]: E0306 01:48:07.141777 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.142211 kubelet[2537]: E0306 01:48:07.142193 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.142211 kubelet[2537]: W0306 01:48:07.142207 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.142311 kubelet[2537]: E0306 01:48:07.142216 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.146578 kubelet[2537]: E0306 01:48:07.146548 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.146578 kubelet[2537]: W0306 01:48:07.146574 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.146660 kubelet[2537]: E0306 01:48:07.146586 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.147093 kubelet[2537]: E0306 01:48:07.147037 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.147093 kubelet[2537]: W0306 01:48:07.147068 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.147093 kubelet[2537]: E0306 01:48:07.147081 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.147571 kubelet[2537]: E0306 01:48:07.147533 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.147571 kubelet[2537]: W0306 01:48:07.147564 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.147571 kubelet[2537]: E0306 01:48:07.147578 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.148032 kubelet[2537]: E0306 01:48:07.147984 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.148032 kubelet[2537]: W0306 01:48:07.148020 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.148111 kubelet[2537]: E0306 01:48:07.148035 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.148505 kubelet[2537]: E0306 01:48:07.148447 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.148505 kubelet[2537]: W0306 01:48:07.148491 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.148505 kubelet[2537]: E0306 01:48:07.148505 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.149016 kubelet[2537]: E0306 01:48:07.148968 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.149016 kubelet[2537]: W0306 01:48:07.148998 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.149016 kubelet[2537]: E0306 01:48:07.149008 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.149431 kubelet[2537]: E0306 01:48:07.149404 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.149431 kubelet[2537]: W0306 01:48:07.149430 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.149562 kubelet[2537]: E0306 01:48:07.149443 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.150446 kubelet[2537]: E0306 01:48:07.150307 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.150446 kubelet[2537]: W0306 01:48:07.150324 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.150446 kubelet[2537]: E0306 01:48:07.150334 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.153778 kubelet[2537]: E0306 01:48:07.153588 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.153778 kubelet[2537]: W0306 01:48:07.153613 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.153778 kubelet[2537]: E0306 01:48:07.153628 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.154176 kubelet[2537]: E0306 01:48:07.154109 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.154425 kubelet[2537]: W0306 01:48:07.154263 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.154425 kubelet[2537]: E0306 01:48:07.154281 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.154675 kubelet[2537]: E0306 01:48:07.154661 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.154749 kubelet[2537]: W0306 01:48:07.154735 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.154797 kubelet[2537]: E0306 01:48:07.154786 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.155215 kubelet[2537]: E0306 01:48:07.155175 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.155454 kubelet[2537]: W0306 01:48:07.155273 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.155454 kubelet[2537]: E0306 01:48:07.155288 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.155669 kubelet[2537]: E0306 01:48:07.155656 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.155743 kubelet[2537]: W0306 01:48:07.155729 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.155798 kubelet[2537]: E0306 01:48:07.155787 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.156203 kubelet[2537]: E0306 01:48:07.156189 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.156274 kubelet[2537]: W0306 01:48:07.156262 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.156332 kubelet[2537]: E0306 01:48:07.156318 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.156700 kubelet[2537]: E0306 01:48:07.156687 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.156885 kubelet[2537]: W0306 01:48:07.156759 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.156885 kubelet[2537]: E0306 01:48:07.156773 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.157395 kubelet[2537]: E0306 01:48:07.157381 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.157467 kubelet[2537]: W0306 01:48:07.157454 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.157531 kubelet[2537]: E0306 01:48:07.157519 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.158092 kubelet[2537]: E0306 01:48:07.158078 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.158339 kubelet[2537]: W0306 01:48:07.158233 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.158339 kubelet[2537]: E0306 01:48:07.158248 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:48:07.158609 kubelet[2537]: E0306 01:48:07.158568 2537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:48:07.158609 kubelet[2537]: W0306 01:48:07.158579 2537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:48:07.158609 kubelet[2537]: E0306 01:48:07.158589 2537 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:48:07.428845 containerd[1454]: time="2026-03-06T01:48:07.428579256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:07.429851 containerd[1454]: time="2026-03-06T01:48:07.429780626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 6 01:48:07.432618 containerd[1454]: time="2026-03-06T01:48:07.432557477Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:07.435378 containerd[1454]: time="2026-03-06T01:48:07.435315699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:07.436261 containerd[1454]: time="2026-03-06T01:48:07.436202513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.13580571s" Mar 6 01:48:07.436261 containerd[1454]: time="2026-03-06T01:48:07.436251624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:48:07.444693 containerd[1454]: time="2026-03-06T01:48:07.444662043Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:48:07.462587 containerd[1454]: time="2026-03-06T01:48:07.462478189Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511\"" Mar 6 01:48:07.463428 containerd[1454]: time="2026-03-06T01:48:07.463386986Z" level=info msg="StartContainer for \"6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511\"" Mar 6 01:48:07.513433 systemd[1]: Started cri-containerd-6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511.scope - libcontainer container 6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511. Mar 6 01:48:07.565288 containerd[1454]: time="2026-03-06T01:48:07.565222148Z" level=info msg="StartContainer for \"6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511\" returns successfully" Mar 6 01:48:07.580012 systemd[1]: cri-containerd-6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511.scope: Deactivated successfully. 
Mar 6 01:48:07.624228 containerd[1454]: time="2026-03-06T01:48:07.623965465Z" level=info msg="shim disconnected" id=6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511 namespace=k8s.io Mar 6 01:48:07.624429 containerd[1454]: time="2026-03-06T01:48:07.624228966Z" level=warning msg="cleaning up after shim disconnected" id=6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511 namespace=k8s.io Mar 6 01:48:07.624429 containerd[1454]: time="2026-03-06T01:48:07.624279941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:48:08.039769 kubelet[2537]: E0306 01:48:08.039718 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:08.131762 kubelet[2537]: E0306 01:48:08.131710 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:08.133355 containerd[1454]: time="2026-03-06T01:48:08.133246928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:48:08.309496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f844b97ac884f582c9d5a111020763a54532eed71db89057ce4bc5894fca511-rootfs.mount: Deactivated successfully. 
Mar 6 01:48:09.134100 kubelet[2537]: E0306 01:48:09.134024 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:10.036148 kubelet[2537]: E0306 01:48:10.036074 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:11.738329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203014061.mount: Deactivated successfully. Mar 6 01:48:12.015605 containerd[1454]: time="2026-03-06T01:48:12.015448770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:12.017071 containerd[1454]: time="2026-03-06T01:48:12.016877832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:48:12.018007 containerd[1454]: time="2026-03-06T01:48:12.017944031Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:12.021623 containerd[1454]: time="2026-03-06T01:48:12.021563522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.888278853s" Mar 6 01:48:12.021623 containerd[1454]: time="2026-03-06T01:48:12.021620648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns 
image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:48:12.028677 containerd[1454]: time="2026-03-06T01:48:12.028561662Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:48:12.036987 kubelet[2537]: E0306 01:48:12.036323 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:12.046642 containerd[1454]: time="2026-03-06T01:48:12.045762758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:12.060258 containerd[1454]: time="2026-03-06T01:48:12.060204372Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3\"" Mar 6 01:48:12.060872 containerd[1454]: time="2026-03-06T01:48:12.060781260Z" level=info msg="StartContainer for \"eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3\"" Mar 6 01:48:12.135318 systemd[1]: Started cri-containerd-eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3.scope - libcontainer container eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3. 
Mar 6 01:48:12.192501 containerd[1454]: time="2026-03-06T01:48:12.192453188Z" level=info msg="StartContainer for \"eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3\" returns successfully" Mar 6 01:48:12.247262 systemd[1]: cri-containerd-eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3.scope: Deactivated successfully. Mar 6 01:48:12.351901 containerd[1454]: time="2026-03-06T01:48:12.351685772Z" level=info msg="shim disconnected" id=eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3 namespace=k8s.io Mar 6 01:48:12.351901 containerd[1454]: time="2026-03-06T01:48:12.351756273Z" level=warning msg="cleaning up after shim disconnected" id=eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3 namespace=k8s.io Mar 6 01:48:12.351901 containerd[1454]: time="2026-03-06T01:48:12.351766192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:48:12.738698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb7727f08537620b710791a962eda2cd094bf42c1660584ff7c2cb5f6efff7f3-rootfs.mount: Deactivated successfully. 
Mar 6 01:48:13.183733 containerd[1454]: time="2026-03-06T01:48:13.183268297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:48:14.039727 kubelet[2537]: E0306 01:48:14.039667 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:15.304636 containerd[1454]: time="2026-03-06T01:48:15.304539098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:15.305439 containerd[1454]: time="2026-03-06T01:48:15.305379879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:48:15.306864 containerd[1454]: time="2026-03-06T01:48:15.306833068Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:15.309796 containerd[1454]: time="2026-03-06T01:48:15.309747018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:15.310850 containerd[1454]: time="2026-03-06T01:48:15.310773061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.127370263s" Mar 6 01:48:15.310850 containerd[1454]: time="2026-03-06T01:48:15.310851126Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:48:15.316622 containerd[1454]: time="2026-03-06T01:48:15.316395245Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:48:15.340022 containerd[1454]: time="2026-03-06T01:48:15.339932702Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142\"" Mar 6 01:48:15.340652 containerd[1454]: time="2026-03-06T01:48:15.340468671Z" level=info msg="StartContainer for \"68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142\"" Mar 6 01:48:15.382371 systemd[1]: run-containerd-runc-k8s.io-68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142-runc.dMmTXa.mount: Deactivated successfully. Mar 6 01:48:15.394326 systemd[1]: Started cri-containerd-68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142.scope - libcontainer container 68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142. 
Mar 6 01:48:15.436595 containerd[1454]: time="2026-03-06T01:48:15.436373548Z" level=info msg="StartContainer for \"68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142\" returns successfully" Mar 6 01:48:16.036791 kubelet[2537]: E0306 01:48:16.036680 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wrl2h" podUID="32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d" Mar 6 01:48:16.177792 systemd[1]: cri-containerd-68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142.scope: Deactivated successfully. Mar 6 01:48:16.222299 containerd[1454]: time="2026-03-06T01:48:16.222217338Z" level=info msg="shim disconnected" id=68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142 namespace=k8s.io Mar 6 01:48:16.222299 containerd[1454]: time="2026-03-06T01:48:16.222290194Z" level=warning msg="cleaning up after shim disconnected" id=68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142 namespace=k8s.io Mar 6 01:48:16.222299 containerd[1454]: time="2026-03-06T01:48:16.222301365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:48:16.245423 kubelet[2537]: I0306 01:48:16.245378 2537 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 6 01:48:16.293775 systemd[1]: Created slice kubepods-besteffort-podbd707b61_d1cf_4e6f_8b10_15d152fccaef.slice - libcontainer container kubepods-besteffort-podbd707b61_d1cf_4e6f_8b10_15d152fccaef.slice. Mar 6 01:48:16.323669 systemd[1]: Created slice kubepods-burstable-pod2d6af042_9f63_4188_b4d2_c221e72cdd50.slice - libcontainer container kubepods-burstable-pod2d6af042_9f63_4188_b4d2_c221e72cdd50.slice. 
Mar 6 01:48:16.324587 kubelet[2537]: I0306 01:48:16.324486 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6af042-9f63-4188-b4d2-c221e72cdd50-config-volume\") pod \"coredns-7d764666f9-9lrv6\" (UID: \"2d6af042-9f63-4188-b4d2-c221e72cdd50\") " pod="kube-system/coredns-7d764666f9-9lrv6" Mar 6 01:48:16.324587 kubelet[2537]: I0306 01:48:16.324548 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7582639d-989e-494f-9494-c73a5ce2a100-config-volume\") pod \"coredns-7d764666f9-6wdml\" (UID: \"7582639d-989e-494f-9494-c73a5ce2a100\") " pod="kube-system/coredns-7d764666f9-6wdml" Mar 6 01:48:16.324587 kubelet[2537]: I0306 01:48:16.324565 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmsm\" (UniqueName: \"kubernetes.io/projected/e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1-kube-api-access-zfmsm\") pod \"calico-apiserver-849779dc44-7w6q8\" (UID: \"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1\") " pod="calico-system/calico-apiserver-849779dc44-7w6q8" Mar 6 01:48:16.324587 kubelet[2537]: I0306 01:48:16.324583 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gf8\" (UniqueName: \"kubernetes.io/projected/cc178117-4377-4398-8b69-5a7eb386dc85-kube-api-access-x7gf8\") pod \"calico-apiserver-849779dc44-jsqkt\" (UID: \"cc178117-4377-4398-8b69-5a7eb386dc85\") " pod="calico-system/calico-apiserver-849779dc44-jsqkt" Mar 6 01:48:16.325353 kubelet[2537]: I0306 01:48:16.324600 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b8104ea-0f2b-4826-8b83-6a37cdde3bc1-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-sf89w\" (UID: 
\"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\") " pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.325353 kubelet[2537]: I0306 01:48:16.324614 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc178117-4377-4398-8b69-5a7eb386dc85-calico-apiserver-certs\") pod \"calico-apiserver-849779dc44-jsqkt\" (UID: \"cc178117-4377-4398-8b69-5a7eb386dc85\") " pod="calico-system/calico-apiserver-849779dc44-jsqkt" Mar 6 01:48:16.325353 kubelet[2537]: I0306 01:48:16.324629 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-ca-bundle\") pod \"whisker-698654fb6d-bqsjg\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.325353 kubelet[2537]: I0306 01:48:16.324645 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1-calico-apiserver-certs\") pod \"calico-apiserver-849779dc44-7w6q8\" (UID: \"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1\") " pod="calico-system/calico-apiserver-849779dc44-7w6q8" Mar 6 01:48:16.325353 kubelet[2537]: I0306 01:48:16.324659 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdxh\" (UniqueName: \"kubernetes.io/projected/7582639d-989e-494f-9494-c73a5ce2a100-kube-api-access-nrdxh\") pod \"coredns-7d764666f9-6wdml\" (UID: \"7582639d-989e-494f-9494-c73a5ce2a100\") " pod="kube-system/coredns-7d764666f9-6wdml" Mar 6 01:48:16.325748 kubelet[2537]: I0306 01:48:16.324675 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/0b8104ea-0f2b-4826-8b83-6a37cdde3bc1-goldmane-key-pair\") pod \"goldmane-9f7667bb8-sf89w\" (UID: \"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\") " pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.325748 kubelet[2537]: I0306 01:48:16.324688 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-nginx-config\") pod \"whisker-698654fb6d-bqsjg\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.325748 kubelet[2537]: I0306 01:48:16.324701 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gfx\" (UniqueName: \"kubernetes.io/projected/bd707b61-d1cf-4e6f-8b10-15d152fccaef-kube-api-access-z5gfx\") pod \"whisker-698654fb6d-bqsjg\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.325748 kubelet[2537]: I0306 01:48:16.324716 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kvl\" (UniqueName: \"kubernetes.io/projected/0b8104ea-0f2b-4826-8b83-6a37cdde3bc1-kube-api-access-f7kvl\") pod \"goldmane-9f7667bb8-sf89w\" (UID: \"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\") " pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.325748 kubelet[2537]: I0306 01:48:16.324731 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5w47\" (UniqueName: \"kubernetes.io/projected/2d6af042-9f63-4188-b4d2-c221e72cdd50-kube-api-access-v5w47\") pod \"coredns-7d764666f9-9lrv6\" (UID: \"2d6af042-9f63-4188-b4d2-c221e72cdd50\") " pod="kube-system/coredns-7d764666f9-9lrv6" Mar 6 01:48:16.325948 kubelet[2537]: I0306 01:48:16.324747 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8104ea-0f2b-4826-8b83-6a37cdde3bc1-config\") pod \"goldmane-9f7667bb8-sf89w\" (UID: \"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\") " pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.325948 kubelet[2537]: I0306 01:48:16.324763 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-backend-key-pair\") pod \"whisker-698654fb6d-bqsjg\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.331618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68d80ac9a2e6e498b0ce8f99b46f64eb2edbcabd8b90945d434b44b0a961c142-rootfs.mount: Deactivated successfully. Mar 6 01:48:16.339256 systemd[1]: Created slice kubepods-burstable-pod7582639d_989e_494f_9494_c73a5ce2a100.slice - libcontainer container kubepods-burstable-pod7582639d_989e_494f_9494_c73a5ce2a100.slice. Mar 6 01:48:16.347554 systemd[1]: Created slice kubepods-besteffort-podcc178117_4377_4398_8b69_5a7eb386dc85.slice - libcontainer container kubepods-besteffort-podcc178117_4377_4398_8b69_5a7eb386dc85.slice. Mar 6 01:48:16.356632 systemd[1]: Created slice kubepods-besteffort-pode62d0a0a_7e1b_4607_a9ed_d2e56e6ebec1.slice - libcontainer container kubepods-besteffort-pode62d0a0a_7e1b_4607_a9ed_d2e56e6ebec1.slice. Mar 6 01:48:16.363924 systemd[1]: Created slice kubepods-besteffort-pod0b8104ea_0f2b_4826_8b83_6a37cdde3bc1.slice - libcontainer container kubepods-besteffort-pod0b8104ea_0f2b_4826_8b83_6a37cdde3bc1.slice. Mar 6 01:48:16.370977 systemd[1]: Created slice kubepods-besteffort-pod17d574a6_9dbc_4f4c_a51a_e2c93d76716b.slice - libcontainer container kubepods-besteffort-pod17d574a6_9dbc_4f4c_a51a_e2c93d76716b.slice. 
Mar 6 01:48:16.425092 kubelet[2537]: I0306 01:48:16.424955 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17d574a6-9dbc-4f4c-a51a-e2c93d76716b-tigera-ca-bundle\") pod \"calico-kube-controllers-5f669bb65f-fqbtc\" (UID: \"17d574a6-9dbc-4f4c-a51a-e2c93d76716b\") " pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" Mar 6 01:48:16.425092 kubelet[2537]: I0306 01:48:16.425034 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvmb\" (UniqueName: \"kubernetes.io/projected/17d574a6-9dbc-4f4c-a51a-e2c93d76716b-kube-api-access-ltvmb\") pod \"calico-kube-controllers-5f669bb65f-fqbtc\" (UID: \"17d574a6-9dbc-4f4c-a51a-e2c93d76716b\") " pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" Mar 6 01:48:16.605917 containerd[1454]: time="2026-03-06T01:48:16.605755052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698654fb6d-bqsjg,Uid:bd707b61-d1cf-4e6f-8b10-15d152fccaef,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:16.640965 kubelet[2537]: E0306 01:48:16.640658 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:16.642611 containerd[1454]: time="2026-03-06T01:48:16.642524487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9lrv6,Uid:2d6af042-9f63-4188-b4d2-c221e72cdd50,Namespace:kube-system,Attempt:0,}" Mar 6 01:48:16.646495 kubelet[2537]: E0306 01:48:16.646288 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:16.646855 containerd[1454]: time="2026-03-06T01:48:16.646769133Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-6wdml,Uid:7582639d-989e-494f-9494-c73a5ce2a100,Namespace:kube-system,Attempt:0,}" Mar 6 01:48:16.655581 containerd[1454]: time="2026-03-06T01:48:16.655447546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-jsqkt,Uid:cc178117-4377-4398-8b69-5a7eb386dc85,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:16.663416 containerd[1454]: time="2026-03-06T01:48:16.663230383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-7w6q8,Uid:e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:16.671764 containerd[1454]: time="2026-03-06T01:48:16.671265278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sf89w,Uid:0b8104ea-0f2b-4826-8b83-6a37cdde3bc1,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:16.683336 containerd[1454]: time="2026-03-06T01:48:16.683309542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f669bb65f-fqbtc,Uid:17d574a6-9dbc-4f4c-a51a-e2c93d76716b,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:16.805322 containerd[1454]: time="2026-03-06T01:48:16.805273601Z" level=error msg="Failed to destroy network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.806466 containerd[1454]: time="2026-03-06T01:48:16.806267419Z" level=error msg="Failed to destroy network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.806922 containerd[1454]: time="2026-03-06T01:48:16.806893617Z" level=error 
msg="encountered an error cleaning up failed sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.807026 containerd[1454]: time="2026-03-06T01:48:16.807004785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wdml,Uid:7582639d-989e-494f-9494-c73a5ce2a100,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.809023 containerd[1454]: time="2026-03-06T01:48:16.808996602Z" level=error msg="encountered an error cleaning up failed sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.809879 containerd[1454]: time="2026-03-06T01:48:16.809845596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9lrv6,Uid:2d6af042-9f63-4188-b4d2-c221e72cdd50,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.827034 kubelet[2537]: E0306 01:48:16.826966 2537 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.827206 kubelet[2537]: E0306 01:48:16.827040 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-6wdml" Mar 6 01:48:16.827206 kubelet[2537]: E0306 01:48:16.827058 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-6wdml" Mar 6 01:48:16.827206 kubelet[2537]: E0306 01:48:16.827098 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-6wdml_kube-system(7582639d-989e-494f-9494-c73a5ce2a100)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-6wdml_kube-system(7582639d-989e-494f-9494-c73a5ce2a100)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-6wdml" podUID="7582639d-989e-494f-9494-c73a5ce2a100" Mar 6 01:48:16.827367 kubelet[2537]: E0306 01:48:16.827254 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.827367 kubelet[2537]: E0306 01:48:16.827276 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9lrv6" Mar 6 01:48:16.827367 kubelet[2537]: E0306 01:48:16.827288 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9lrv6" Mar 6 01:48:16.827444 kubelet[2537]: E0306 01:48:16.827315 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-9lrv6_kube-system(2d6af042-9f63-4188-b4d2-c221e72cdd50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-9lrv6_kube-system(2d6af042-9f63-4188-b4d2-c221e72cdd50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-9lrv6" podUID="2d6af042-9f63-4188-b4d2-c221e72cdd50" Mar 6 01:48:16.838227 containerd[1454]: time="2026-03-06T01:48:16.838112395Z" level=error msg="Failed to destroy network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.840584 containerd[1454]: time="2026-03-06T01:48:16.840402638Z" level=error msg="encountered an error cleaning up failed sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.840584 containerd[1454]: time="2026-03-06T01:48:16.840458121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698654fb6d-bqsjg,Uid:bd707b61-d1cf-4e6f-8b10-15d152fccaef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.840772 kubelet[2537]: E0306 01:48:16.840711 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.840772 kubelet[2537]: E0306 01:48:16.840751 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.840772 kubelet[2537]: E0306 01:48:16.840766 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698654fb6d-bqsjg" Mar 6 01:48:16.840901 kubelet[2537]: E0306 01:48:16.840798 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-698654fb6d-bqsjg_calico-system(bd707b61-d1cf-4e6f-8b10-15d152fccaef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-698654fb6d-bqsjg_calico-system(bd707b61-d1cf-4e6f-8b10-15d152fccaef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-698654fb6d-bqsjg" podUID="bd707b61-d1cf-4e6f-8b10-15d152fccaef" Mar 6 01:48:16.872065 containerd[1454]: 
time="2026-03-06T01:48:16.870767799Z" level=error msg="Failed to destroy network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.872663 containerd[1454]: time="2026-03-06T01:48:16.872630756Z" level=error msg="encountered an error cleaning up failed sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.872899 containerd[1454]: time="2026-03-06T01:48:16.872762753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f669bb65f-fqbtc,Uid:17d574a6-9dbc-4f4c-a51a-e2c93d76716b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.873449 containerd[1454]: time="2026-03-06T01:48:16.872672118Z" level=error msg="Failed to destroy network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.874194 containerd[1454]: time="2026-03-06T01:48:16.874167644Z" level=error msg="encountered an error cleaning up failed sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.874304 containerd[1454]: time="2026-03-06T01:48:16.874283751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-7w6q8,Uid:e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.874766 kubelet[2537]: E0306 01:48:16.874717 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.874863 kubelet[2537]: E0306 01:48:16.874785 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-849779dc44-7w6q8" Mar 6 01:48:16.874863 kubelet[2537]: E0306 01:48:16.874830 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-849779dc44-7w6q8" Mar 6 01:48:16.874922 kubelet[2537]: E0306 01:48:16.874883 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849779dc44-7w6q8_calico-system(e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849779dc44-7w6q8_calico-system(e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-849779dc44-7w6q8" podUID="e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1" Mar 6 01:48:16.875539 kubelet[2537]: E0306 01:48:16.875367 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.875685 kubelet[2537]: E0306 01:48:16.875578 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" Mar 6 01:48:16.876263 
kubelet[2537]: E0306 01:48:16.876020 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" Mar 6 01:48:16.876263 kubelet[2537]: E0306 01:48:16.876192 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f669bb65f-fqbtc_calico-system(17d574a6-9dbc-4f4c-a51a-e2c93d76716b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f669bb65f-fqbtc_calico-system(17d574a6-9dbc-4f4c-a51a-e2c93d76716b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" podUID="17d574a6-9dbc-4f4c-a51a-e2c93d76716b" Mar 6 01:48:16.881395 containerd[1454]: time="2026-03-06T01:48:16.881363872Z" level=error msg="Failed to destroy network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.881979 containerd[1454]: time="2026-03-06T01:48:16.881939105Z" level=error msg="encountered an error cleaning up failed sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.882115 containerd[1454]: time="2026-03-06T01:48:16.882063608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sf89w,Uid:0b8104ea-0f2b-4826-8b83-6a37cdde3bc1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.882388 kubelet[2537]: E0306 01:48:16.882318 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.882431 kubelet[2537]: E0306 01:48:16.882402 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.882431 kubelet[2537]: E0306 01:48:16.882421 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-sf89w" Mar 6 01:48:16.882494 kubelet[2537]: E0306 01:48:16.882461 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-sf89w_calico-system(0b8104ea-0f2b-4826-8b83-6a37cdde3bc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-sf89w_calico-system(0b8104ea-0f2b-4826-8b83-6a37cdde3bc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-sf89w" podUID="0b8104ea-0f2b-4826-8b83-6a37cdde3bc1" Mar 6 01:48:16.887271 containerd[1454]: time="2026-03-06T01:48:16.886895714Z" level=error msg="Failed to destroy network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.887717 containerd[1454]: time="2026-03-06T01:48:16.887612852Z" level=error msg="encountered an error cleaning up failed sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.887717 containerd[1454]: time="2026-03-06T01:48:16.887681350Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-849779dc44-jsqkt,Uid:cc178117-4377-4398-8b69-5a7eb386dc85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.887959 kubelet[2537]: E0306 01:48:16.887918 2537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:16.887996 kubelet[2537]: E0306 01:48:16.887969 2537 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-849779dc44-jsqkt" Mar 6 01:48:16.888056 kubelet[2537]: E0306 01:48:16.887986 2537 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-849779dc44-jsqkt" Mar 6 01:48:16.888207 kubelet[2537]: E0306 01:48:16.888104 2537 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849779dc44-jsqkt_calico-system(cc178117-4377-4398-8b69-5a7eb386dc85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849779dc44-jsqkt_calico-system(cc178117-4377-4398-8b69-5a7eb386dc85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-849779dc44-jsqkt" podUID="cc178117-4377-4398-8b69-5a7eb386dc85" Mar 6 01:48:17.200403 kubelet[2537]: I0306 01:48:17.199940 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:17.202881 kubelet[2537]: I0306 01:48:17.202612 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:17.211383 kubelet[2537]: I0306 01:48:17.211078 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:17.213018 kubelet[2537]: I0306 01:48:17.212997 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:17.228761 containerd[1454]: time="2026-03-06T01:48:17.228311113Z" level=info msg="StopPodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" Mar 6 01:48:17.228761 containerd[1454]: time="2026-03-06T01:48:17.228406299Z" level=info msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\"" Mar 6 01:48:17.229080 containerd[1454]: 
time="2026-03-06T01:48:17.229044031Z" level=info msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" Mar 6 01:48:17.230441 containerd[1454]: time="2026-03-06T01:48:17.229768925Z" level=info msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" Mar 6 01:48:17.234547 containerd[1454]: time="2026-03-06T01:48:17.234468986Z" level=info msg="Ensure that sandbox afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2 in task-service has been cleanup successfully" Mar 6 01:48:17.235086 containerd[1454]: time="2026-03-06T01:48:17.234560664Z" level=info msg="Ensure that sandbox 56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133 in task-service has been cleanup successfully" Mar 6 01:48:17.235086 containerd[1454]: time="2026-03-06T01:48:17.234480910Z" level=info msg="Ensure that sandbox 027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9 in task-service has been cleanup successfully" Mar 6 01:48:17.235086 containerd[1454]: time="2026-03-06T01:48:17.234486719Z" level=info msg="Ensure that sandbox e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a in task-service has been cleanup successfully" Mar 6 01:48:17.245571 containerd[1454]: time="2026-03-06T01:48:17.245456822Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:48:17.246306 kubelet[2537]: I0306 01:48:17.245718 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:17.246940 containerd[1454]: time="2026-03-06T01:48:17.246918762Z" level=info msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" Mar 6 01:48:17.249469 kubelet[2537]: I0306 01:48:17.248404 2537 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:17.249791 containerd[1454]: time="2026-03-06T01:48:17.249691225Z" level=info msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" Mar 6 01:48:17.251956 containerd[1454]: time="2026-03-06T01:48:17.250250779Z" level=info msg="Ensure that sandbox ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e in task-service has been cleanup successfully" Mar 6 01:48:17.253973 containerd[1454]: time="2026-03-06T01:48:17.253947429Z" level=info msg="Ensure that sandbox eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b in task-service has been cleanup successfully" Mar 6 01:48:17.256642 kubelet[2537]: I0306 01:48:17.256462 2537 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:17.258490 containerd[1454]: time="2026-03-06T01:48:17.258424962Z" level=info msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" Mar 6 01:48:17.258619 containerd[1454]: time="2026-03-06T01:48:17.258595801Z" level=info msg="Ensure that sandbox 155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c in task-service has been cleanup successfully" Mar 6 01:48:17.288377 containerd[1454]: time="2026-03-06T01:48:17.288333343Z" level=info msg="CreateContainer within sandbox \"57b0a532705dbc86f4946bde3b2d290eee75776df955772fd7b2ac639a2439dd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"36d32d384116cf7c3aff4bea4f70fc288a8c12cf4c2037011ccf50fb65fbd515\"" Mar 6 01:48:17.291606 containerd[1454]: time="2026-03-06T01:48:17.291572837Z" level=info msg="StartContainer for \"36d32d384116cf7c3aff4bea4f70fc288a8c12cf4c2037011ccf50fb65fbd515\"" Mar 6 01:48:17.309217 containerd[1454]: time="2026-03-06T01:48:17.309045667Z" level=error msg="StopPodSandbox for 
\"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" failed" error="failed to destroy network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.309913 kubelet[2537]: E0306 01:48:17.309652 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:17.309913 kubelet[2537]: E0306 01:48:17.309749 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2"} Mar 6 01:48:17.309913 kubelet[2537]: E0306 01:48:17.309844 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc178117-4377-4398-8b69-5a7eb386dc85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.309913 kubelet[2537]: E0306 01:48:17.309874 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc178117-4377-4398-8b69-5a7eb386dc85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-849779dc44-jsqkt" podUID="cc178117-4377-4398-8b69-5a7eb386dc85" Mar 6 01:48:17.333219 containerd[1454]: time="2026-03-06T01:48:17.332552496Z" level=error msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" failed" error="failed to destroy network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.333304 kubelet[2537]: E0306 01:48:17.333039 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:17.333304 kubelet[2537]: E0306 01:48:17.333071 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9"} Mar 6 01:48:17.333304 kubelet[2537]: E0306 01:48:17.333096 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7582639d-989e-494f-9494-c73a5ce2a100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.333304 kubelet[2537]: E0306 01:48:17.333173 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7582639d-989e-494f-9494-c73a5ce2a100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-6wdml" podUID="7582639d-989e-494f-9494-c73a5ce2a100" Mar 6 01:48:17.339564 containerd[1454]: time="2026-03-06T01:48:17.339467943Z" level=error msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" failed" error="failed to destroy network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.340266 kubelet[2537]: E0306 01:48:17.340228 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:17.340472 kubelet[2537]: E0306 01:48:17.340408 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b"} Mar 6 01:48:17.340588 kubelet[2537]: E0306 01:48:17.340572 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d6af042-9f63-4188-b4d2-c221e72cdd50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.340927 kubelet[2537]: E0306 01:48:17.340873 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d6af042-9f63-4188-b4d2-c221e72cdd50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-9lrv6" podUID="2d6af042-9f63-4188-b4d2-c221e72cdd50" Mar 6 01:48:17.356362 containerd[1454]: time="2026-03-06T01:48:17.355991949Z" level=error msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" failed" error="failed to destroy network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.356440 kubelet[2537]: E0306 01:48:17.356228 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:17.356440 kubelet[2537]: E0306 01:48:17.356261 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133"} Mar 6 01:48:17.356440 kubelet[2537]: E0306 01:48:17.356285 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.356440 kubelet[2537]: E0306 01:48:17.356311 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-849779dc44-7w6q8" podUID="e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1" Mar 6 01:48:17.362632 containerd[1454]: time="2026-03-06T01:48:17.362410077Z" level=error msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" failed" error="failed to destroy network for 
sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.362878 kubelet[2537]: E0306 01:48:17.362775 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:17.362878 kubelet[2537]: E0306 01:48:17.362862 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e"} Mar 6 01:48:17.362944 kubelet[2537]: E0306 01:48:17.362894 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.362944 kubelet[2537]: E0306 01:48:17.362920 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-698654fb6d-bqsjg" podUID="bd707b61-d1cf-4e6f-8b10-15d152fccaef" Mar 6 01:48:17.363170 containerd[1454]: time="2026-03-06T01:48:17.362982688Z" level=error msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" failed" error="failed to destroy network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.363303 kubelet[2537]: E0306 01:48:17.363248 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:17.363303 kubelet[2537]: E0306 01:48:17.363276 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a"} Mar 6 01:48:17.363303 kubelet[2537]: E0306 01:48:17.363299 2537 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Mar 6 01:48:17.363303 kubelet[2537]: E0306 01:48:17.363318 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-sf89w" podUID="0b8104ea-0f2b-4826-8b83-6a37cdde3bc1" Mar 6 01:48:17.366222 containerd[1454]: time="2026-03-06T01:48:17.365733733Z" level=error msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" failed" error="failed to destroy network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:48:17.366277 kubelet[2537]: E0306 01:48:17.365940 2537 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:17.366277 kubelet[2537]: E0306 01:48:17.366023 2537 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c"} Mar 6 01:48:17.366277 kubelet[2537]: E0306 01:48:17.366042 2537 
kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17d574a6-9dbc-4f4c-a51a-e2c93d76716b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 6 01:48:17.366277 kubelet[2537]: E0306 01:48:17.366061 2537 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17d574a6-9dbc-4f4c-a51a-e2c93d76716b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" podUID="17d574a6-9dbc-4f4c-a51a-e2c93d76716b" Mar 6 01:48:17.383421 systemd[1]: Started cri-containerd-36d32d384116cf7c3aff4bea4f70fc288a8c12cf4c2037011ccf50fb65fbd515.scope - libcontainer container 36d32d384116cf7c3aff4bea4f70fc288a8c12cf4c2037011ccf50fb65fbd515. Mar 6 01:48:17.422899 containerd[1454]: time="2026-03-06T01:48:17.422754198Z" level=info msg="StartContainer for \"36d32d384116cf7c3aff4bea4f70fc288a8c12cf4c2037011ccf50fb65fbd515\" returns successfully" Mar 6 01:48:18.047352 systemd[1]: Created slice kubepods-besteffort-pod32f49841_9b6b_4b9e_9ec2_2a6e7cda3f0d.slice - libcontainer container kubepods-besteffort-pod32f49841_9b6b_4b9e_9ec2_2a6e7cda3f0d.slice. 
Mar 6 01:48:18.053700 containerd[1454]: time="2026-03-06T01:48:18.053357115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wrl2h,Uid:32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:18.216395 systemd-networkd[1374]: cali88b8dd080c5: Link UP Mar 6 01:48:18.216727 systemd-networkd[1374]: cali88b8dd080c5: Gained carrier Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.099 [ERROR][3867] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.121 [INFO][3867] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wrl2h-eth0 csi-node-driver- calico-system 32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d 753 0 2026-03-06 01:48:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wrl2h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali88b8dd080c5 [] [] }} ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.121 [INFO][3867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 
01:48:18.159 [INFO][3882] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" HandleID="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Workload="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.165 [INFO][3882] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" HandleID="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Workload="localhost-k8s-csi--node--driver--wrl2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wrl2h", "timestamp":"2026-03-06 01:48:18.159044729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00003adc0)} Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.165 [INFO][3882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.165 [INFO][3882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.166 [INFO][3882] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.169 [INFO][3882] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.174 [INFO][3882] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.180 [INFO][3882] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.182 [INFO][3882] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.184 [INFO][3882] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.184 [INFO][3882] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.186 [INFO][3882] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.192 [INFO][3882] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.197 [INFO][3882] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.197 [INFO][3882] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" host="localhost" Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.197 [INFO][3882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:18.229600 containerd[1454]: 2026-03-06 01:48:18.197 [INFO][3882] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" HandleID="k8s-pod-network.405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Workload="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.203 [INFO][3867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wrl2h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wrl2h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88b8dd080c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.203 [INFO][3867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.203 [INFO][3867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88b8dd080c5 ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.215 [INFO][3867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.215 [INFO][3867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" 
Namespace="calico-system" Pod="csi-node-driver-wrl2h" WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wrl2h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc", Pod:"csi-node-driver-wrl2h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88b8dd080c5", MAC:"be:bb:f4:40:c1:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:18.230273 containerd[1454]: 2026-03-06 01:48:18.225 [INFO][3867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc" Namespace="calico-system" Pod="csi-node-driver-wrl2h" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wrl2h-eth0" Mar 6 01:48:18.256668 containerd[1454]: time="2026-03-06T01:48:18.256350609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:18.256668 containerd[1454]: time="2026-03-06T01:48:18.256475744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:18.256668 containerd[1454]: time="2026-03-06T01:48:18.256490841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:18.256896 containerd[1454]: time="2026-03-06T01:48:18.256586019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:18.263780 containerd[1454]: time="2026-03-06T01:48:18.263752511Z" level=info msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" Mar 6 01:48:18.295192 systemd[1]: Started cri-containerd-405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc.scope - libcontainer container 405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc. 
Mar 6 01:48:18.320461 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:48:18.344553 containerd[1454]: time="2026-03-06T01:48:18.344474308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wrl2h,Uid:32f49841-9b6b-4b9e-9ec2-2a6e7cda3f0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc\"" Mar 6 01:48:18.348443 containerd[1454]: time="2026-03-06T01:48:18.348341687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 01:48:18.362426 kubelet[2537]: I0306 01:48:18.362201 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-26pps" podStartSLOduration=2.07652101 podStartE2EDuration="16.36217986s" podCreationTimestamp="2026-03-06 01:48:02 +0000 UTC" firstStartedPulling="2026-03-06 01:48:02.923884688 +0000 UTC m=+17.033675804" lastFinishedPulling="2026-03-06 01:48:17.209543538 +0000 UTC m=+31.319334654" observedRunningTime="2026-03-06 01:48:18.300275729 +0000 UTC m=+32.410066845" watchObservedRunningTime="2026-03-06 01:48:18.36217986 +0000 UTC m=+32.471970976" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.361 [INFO][3938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.363 [INFO][3938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" iface="eth0" netns="/var/run/netns/cni-14bd09af-8635-6669-07c2-7ca707c0820a" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.363 [INFO][3938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" iface="eth0" netns="/var/run/netns/cni-14bd09af-8635-6669-07c2-7ca707c0820a" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.363 [INFO][3938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" iface="eth0" netns="/var/run/netns/cni-14bd09af-8635-6669-07c2-7ca707c0820a" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.363 [INFO][3938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.363 [INFO][3938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.394 [INFO][3960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.394 [INFO][3960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.394 [INFO][3960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.402 [WARNING][3960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.402 [INFO][3960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.404 [INFO][3960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:18.412188 containerd[1454]: 2026-03-06 01:48:18.409 [INFO][3938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:18.412730 containerd[1454]: time="2026-03-06T01:48:18.412493612Z" level=info msg="TearDown network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" successfully" Mar 6 01:48:18.412730 containerd[1454]: time="2026-03-06T01:48:18.412520983Z" level=info msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" returns successfully" Mar 6 01:48:18.415445 systemd[1]: run-netns-cni\x2d14bd09af\x2d8635\x2d6669\x2d07c2\x2d7ca707c0820a.mount: Deactivated successfully. 
Mar 6 01:48:18.440592 kubelet[2537]: I0306 01:48:18.440456 2537 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-nginx-config\") pod \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " Mar 6 01:48:18.440592 kubelet[2537]: I0306 01:48:18.440531 2537 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bd707b61-d1cf-4e6f-8b10-15d152fccaef-kube-api-access-z5gfx\" (UniqueName: \"kubernetes.io/projected/bd707b61-d1cf-4e6f-8b10-15d152fccaef-kube-api-access-z5gfx\") pod \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " Mar 6 01:48:18.440592 kubelet[2537]: I0306 01:48:18.440564 2537 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-ca-bundle\") pod \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " Mar 6 01:48:18.440592 kubelet[2537]: I0306 01:48:18.440583 2537 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-backend-key-pair\") pod \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\" (UID: \"bd707b61-d1cf-4e6f-8b10-15d152fccaef\") " Mar 6 01:48:18.441429 kubelet[2537]: I0306 01:48:18.441397 2537 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-ca-bundle" pod "bd707b61-d1cf-4e6f-8b10-15d152fccaef" (UID: "bd707b61-d1cf-4e6f-8b10-15d152fccaef"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:48:18.442027 kubelet[2537]: I0306 01:48:18.441850 2537 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-nginx-config" pod "bd707b61-d1cf-4e6f-8b10-15d152fccaef" (UID: "bd707b61-d1cf-4e6f-8b10-15d152fccaef"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:48:18.445709 kubelet[2537]: I0306 01:48:18.445608 2537 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-backend-key-pair" pod "bd707b61-d1cf-4e6f-8b10-15d152fccaef" (UID: "bd707b61-d1cf-4e6f-8b10-15d152fccaef"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:48:18.448020 systemd[1]: var-lib-kubelet-pods-bd707b61\x2dd1cf\x2d4e6f\x2d8b10\x2d15d152fccaef-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 01:48:18.449325 kubelet[2537]: I0306 01:48:18.449252 2537 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd707b61-d1cf-4e6f-8b10-15d152fccaef-kube-api-access-z5gfx" pod "bd707b61-d1cf-4e6f-8b10-15d152fccaef" (UID: "bd707b61-d1cf-4e6f-8b10-15d152fccaef"). InnerVolumeSpecName "kube-api-access-z5gfx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:48:18.451028 systemd[1]: var-lib-kubelet-pods-bd707b61\x2dd1cf\x2d4e6f\x2d8b10\x2d15d152fccaef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5gfx.mount: Deactivated successfully. 
Mar 6 01:48:18.541599 kubelet[2537]: I0306 01:48:18.541486 2537 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5gfx\" (UniqueName: \"kubernetes.io/projected/bd707b61-d1cf-4e6f-8b10-15d152fccaef-kube-api-access-z5gfx\") on node \"localhost\" DevicePath \"\"" Mar 6 01:48:18.541599 kubelet[2537]: I0306 01:48:18.541541 2537 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:48:18.541599 kubelet[2537]: I0306 01:48:18.541551 2537 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd707b61-d1cf-4e6f-8b10-15d152fccaef-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 01:48:18.541599 kubelet[2537]: I0306 01:48:18.541560 2537 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd707b61-d1cf-4e6f-8b10-15d152fccaef-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:48:18.960341 containerd[1454]: time="2026-03-06T01:48:18.959270879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:18.961420 containerd[1454]: time="2026-03-06T01:48:18.961333779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 01:48:18.963880 containerd[1454]: time="2026-03-06T01:48:18.963786356Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:18.969546 containerd[1454]: time="2026-03-06T01:48:18.969465005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:18.971526 containerd[1454]: time="2026-03-06T01:48:18.971437026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 623.041538ms" Mar 6 01:48:18.971526 containerd[1454]: time="2026-03-06T01:48:18.971518076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 01:48:18.983714 containerd[1454]: time="2026-03-06T01:48:18.983636777Z" level=info msg="CreateContainer within sandbox \"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 01:48:19.011706 containerd[1454]: time="2026-03-06T01:48:19.011391437Z" level=info msg="CreateContainer within sandbox \"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5dab8696fa8047177f9435bf3070781294fc9e9d260b2fd2b2eb291f19904330\"" Mar 6 01:48:19.013428 containerd[1454]: time="2026-03-06T01:48:19.013098400Z" level=info msg="StartContainer for \"5dab8696fa8047177f9435bf3070781294fc9e9d260b2fd2b2eb291f19904330\"" Mar 6 01:48:19.086404 systemd[1]: Started cri-containerd-5dab8696fa8047177f9435bf3070781294fc9e9d260b2fd2b2eb291f19904330.scope - libcontainer container 5dab8696fa8047177f9435bf3070781294fc9e9d260b2fd2b2eb291f19904330. 
Mar 6 01:48:19.156797 containerd[1454]: time="2026-03-06T01:48:19.156641169Z" level=info msg="StartContainer for \"5dab8696fa8047177f9435bf3070781294fc9e9d260b2fd2b2eb291f19904330\" returns successfully" Mar 6 01:48:19.160568 containerd[1454]: time="2026-03-06T01:48:19.160020647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 01:48:19.272505 kubelet[2537]: I0306 01:48:19.272416 2537 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:48:19.280227 systemd[1]: Removed slice kubepods-besteffort-podbd707b61_d1cf_4e6f_8b10_15d152fccaef.slice - libcontainer container kubepods-besteffort-podbd707b61_d1cf_4e6f_8b10_15d152fccaef.slice. Mar 6 01:48:19.355736 systemd[1]: Created slice kubepods-besteffort-pod5d6874c3_5e55_4e14_9124_51e622c7b507.slice - libcontainer container kubepods-besteffort-pod5d6874c3_5e55_4e14_9124_51e622c7b507.slice. Mar 6 01:48:19.421194 kernel: calico-node[4019]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 01:48:19.450175 kubelet[2537]: I0306 01:48:19.447715 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndz4r\" (UniqueName: \"kubernetes.io/projected/5d6874c3-5e55-4e14-9124-51e622c7b507-kube-api-access-ndz4r\") pod \"whisker-5bd9f68d65-srctg\" (UID: \"5d6874c3-5e55-4e14-9124-51e622c7b507\") " pod="calico-system/whisker-5bd9f68d65-srctg" Mar 6 01:48:19.450175 kubelet[2537]: I0306 01:48:19.447761 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d6874c3-5e55-4e14-9124-51e622c7b507-whisker-backend-key-pair\") pod \"whisker-5bd9f68d65-srctg\" (UID: \"5d6874c3-5e55-4e14-9124-51e622c7b507\") " pod="calico-system/whisker-5bd9f68d65-srctg" Mar 6 01:48:19.450175 kubelet[2537]: I0306 01:48:19.447785 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d6874c3-5e55-4e14-9124-51e622c7b507-whisker-ca-bundle\") pod \"whisker-5bd9f68d65-srctg\" (UID: \"5d6874c3-5e55-4e14-9124-51e622c7b507\") " pod="calico-system/whisker-5bd9f68d65-srctg" Mar 6 01:48:19.450175 kubelet[2537]: I0306 01:48:19.447835 2537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5d6874c3-5e55-4e14-9124-51e622c7b507-nginx-config\") pod \"whisker-5bd9f68d65-srctg\" (UID: \"5d6874c3-5e55-4e14-9124-51e622c7b507\") " pod="calico-system/whisker-5bd9f68d65-srctg" Mar 6 01:48:19.682221 containerd[1454]: time="2026-03-06T01:48:19.681993522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd9f68d65-srctg,Uid:5d6874c3-5e55-4e14-9124-51e622c7b507,Namespace:calico-system,Attempt:0,}" Mar 6 01:48:19.871920 systemd-networkd[1374]: cali259a01bccd0: Link UP Mar 6 01:48:19.873422 systemd-networkd[1374]: cali259a01bccd0: Gained carrier Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.778 [INFO][4158] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5bd9f68d65--srctg-eth0 whisker-5bd9f68d65- calico-system 5d6874c3-5e55-4e14-9124-51e622c7b507 972 0 2026-03-06 01:48:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bd9f68d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5bd9f68d65-srctg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali259a01bccd0 [] [] }} ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.778 
[INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.813 [INFO][4175] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" HandleID="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Workload="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.822 [INFO][4175] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" HandleID="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Workload="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5bd9f68d65-srctg", "timestamp":"2026-03-06 01:48:19.813224986 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002471e0)} Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.822 [INFO][4175] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.822 [INFO][4175] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.822 [INFO][4175] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.826 [INFO][4175] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.832 [INFO][4175] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.838 [INFO][4175] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.841 [INFO][4175] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.844 [INFO][4175] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.844 [INFO][4175] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.846 [INFO][4175] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327 Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.853 [INFO][4175] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.859 [INFO][4175] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.859 [INFO][4175] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" host="localhost" Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.859 [INFO][4175] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:19.894285 containerd[1454]: 2026-03-06 01:48:19.859 [INFO][4175] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" HandleID="k8s-pod-network.ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Workload="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.865 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bd9f68d65--srctg-eth0", GenerateName:"whisker-5bd9f68d65-", Namespace:"calico-system", SelfLink:"", UID:"5d6874c3-5e55-4e14-9124-51e622c7b507", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd9f68d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5bd9f68d65-srctg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali259a01bccd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.866 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.866 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali259a01bccd0 ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.875 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.876 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" 
WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bd9f68d65--srctg-eth0", GenerateName:"whisker-5bd9f68d65-", Namespace:"calico-system", SelfLink:"", UID:"5d6874c3-5e55-4e14-9124-51e622c7b507", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd9f68d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327", Pod:"whisker-5bd9f68d65-srctg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali259a01bccd0", MAC:"b2:bd:08:92:67:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:19.895647 containerd[1454]: 2026-03-06 01:48:19.887 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327" Namespace="calico-system" Pod="whisker-5bd9f68d65-srctg" WorkloadEndpoint="localhost-k8s-whisker--5bd9f68d65--srctg-eth0" Mar 6 01:48:19.926537 containerd[1454]: time="2026-03-06T01:48:19.926027588Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:19.926537 containerd[1454]: time="2026-03-06T01:48:19.926077421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:19.926537 containerd[1454]: time="2026-03-06T01:48:19.926087570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:19.926537 containerd[1454]: time="2026-03-06T01:48:19.926287021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:19.931482 systemd-networkd[1374]: cali88b8dd080c5: Gained IPv6LL Mar 6 01:48:19.966335 systemd[1]: Started cri-containerd-ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327.scope - libcontainer container ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327. 
Mar 6 01:48:19.992899 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:48:20.034243 containerd[1454]: time="2026-03-06T01:48:20.034044783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd9f68d65-srctg,Uid:5d6874c3-5e55-4e14-9124-51e622c7b507,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327\"" Mar 6 01:48:20.041754 kubelet[2537]: I0306 01:48:20.041532 2537 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bd707b61-d1cf-4e6f-8b10-15d152fccaef" path="/var/lib/kubelet/pods/bd707b61-d1cf-4e6f-8b10-15d152fccaef/volumes" Mar 6 01:48:20.045219 containerd[1454]: time="2026-03-06T01:48:20.044994173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.046500 containerd[1454]: time="2026-03-06T01:48:20.046415356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 01:48:20.047899 containerd[1454]: time="2026-03-06T01:48:20.047851206Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.052007 containerd[1454]: time="2026-03-06T01:48:20.051980421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.055425 containerd[1454]: time="2026-03-06T01:48:20.055331689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 895.278972ms" Mar 6 01:48:20.055425 containerd[1454]: time="2026-03-06T01:48:20.055378095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 01:48:20.057057 containerd[1454]: time="2026-03-06T01:48:20.056978318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 01:48:20.062195 containerd[1454]: time="2026-03-06T01:48:20.062077359Z" level=info msg="CreateContainer within sandbox \"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 01:48:20.083614 containerd[1454]: time="2026-03-06T01:48:20.083417155Z" level=info msg="CreateContainer within sandbox \"405db820a25a088ed949a4ea9a2634a19d9b3f456c988c2527be49e1484a0cfc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e726ea2ab1066d89d0d8d4db51e4ae8c546849fc27d301f61a7688c6568b272\"" Mar 6 01:48:20.086097 containerd[1454]: time="2026-03-06T01:48:20.086027670Z" level=info msg="StartContainer for \"8e726ea2ab1066d89d0d8d4db51e4ae8c546849fc27d301f61a7688c6568b272\"" Mar 6 01:48:20.091183 systemd-networkd[1374]: vxlan.calico: Link UP Mar 6 01:48:20.091197 systemd-networkd[1374]: vxlan.calico: Gained carrier Mar 6 01:48:20.146406 systemd[1]: Started cri-containerd-8e726ea2ab1066d89d0d8d4db51e4ae8c546849fc27d301f61a7688c6568b272.scope - libcontainer container 8e726ea2ab1066d89d0d8d4db51e4ae8c546849fc27d301f61a7688c6568b272. 
Mar 6 01:48:20.192482 containerd[1454]: time="2026-03-06T01:48:20.192381122Z" level=info msg="StartContainer for \"8e726ea2ab1066d89d0d8d4db51e4ae8c546849fc27d301f61a7688c6568b272\" returns successfully" Mar 6 01:48:20.296500 kubelet[2537]: I0306 01:48:20.294953 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-wrl2h" podStartSLOduration=16.586219827 podStartE2EDuration="18.29493546s" podCreationTimestamp="2026-03-06 01:48:02 +0000 UTC" firstStartedPulling="2026-03-06 01:48:18.347485889 +0000 UTC m=+32.457277005" lastFinishedPulling="2026-03-06 01:48:20.056201522 +0000 UTC m=+34.165992638" observedRunningTime="2026-03-06 01:48:20.292713322 +0000 UTC m=+34.402504437" watchObservedRunningTime="2026-03-06 01:48:20.29493546 +0000 UTC m=+34.404726577" Mar 6 01:48:20.600668 containerd[1454]: time="2026-03-06T01:48:20.600506548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.601416 containerd[1454]: time="2026-03-06T01:48:20.601343756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 01:48:20.602769 containerd[1454]: time="2026-03-06T01:48:20.602715562Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.605790 containerd[1454]: time="2026-03-06T01:48:20.605707982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:20.606727 containerd[1454]: time="2026-03-06T01:48:20.606663549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 549.623786ms" Mar 6 01:48:20.606727 containerd[1454]: time="2026-03-06T01:48:20.606714805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 01:48:20.617049 containerd[1454]: time="2026-03-06T01:48:20.616985911Z" level=info msg="CreateContainer within sandbox \"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:48:20.637292 containerd[1454]: time="2026-03-06T01:48:20.637203887Z" level=info msg="CreateContainer within sandbox \"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2\"" Mar 6 01:48:20.637953 containerd[1454]: time="2026-03-06T01:48:20.637883644Z" level=info msg="StartContainer for \"3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2\"" Mar 6 01:48:20.690410 systemd[1]: Started cri-containerd-3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2.scope - libcontainer container 3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2. 
Mar 6 01:48:20.738967 containerd[1454]: time="2026-03-06T01:48:20.738888082Z" level=info msg="StartContainer for \"3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2\" returns successfully" Mar 6 01:48:20.741108 containerd[1454]: time="2026-03-06T01:48:20.741004148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 01:48:21.108061 kubelet[2537]: I0306 01:48:21.108001 2537 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 01:48:21.108061 kubelet[2537]: I0306 01:48:21.108033 2537 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 01:48:21.334240 systemd[1]: run-containerd-runc-k8s.io-3b9900eb6ce7e6802a8b650d0de9d71e8ede2e2eae3c4a796d92739c85ba9cf2-runc.J0Sj1G.mount: Deactivated successfully. Mar 6 01:48:21.463869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701026396.mount: Deactivated successfully. 
Mar 6 01:48:21.485861 containerd[1454]: time="2026-03-06T01:48:21.485745085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:21.486851 containerd[1454]: time="2026-03-06T01:48:21.486754675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 01:48:21.488227 containerd[1454]: time="2026-03-06T01:48:21.488170237Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:21.491323 containerd[1454]: time="2026-03-06T01:48:21.491277832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:21.492334 containerd[1454]: time="2026-03-06T01:48:21.492219269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 751.156612ms" Mar 6 01:48:21.492334 containerd[1454]: time="2026-03-06T01:48:21.492268991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 01:48:21.498305 containerd[1454]: time="2026-03-06T01:48:21.498260922Z" level=info msg="CreateContainer within sandbox \"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:48:21.514755 
containerd[1454]: time="2026-03-06T01:48:21.514701222Z" level=info msg="CreateContainer within sandbox \"ce5b91de4eb235847c50e2fc4298adcb329153dbf734745b898d6348f0769327\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"40b49f6c09cfae8a60d45b2a745a094248ad49098640e237f269d7f5d0d70618\"" Mar 6 01:48:21.516970 containerd[1454]: time="2026-03-06T01:48:21.515717330Z" level=info msg="StartContainer for \"40b49f6c09cfae8a60d45b2a745a094248ad49098640e237f269d7f5d0d70618\"" Mar 6 01:48:21.559395 systemd[1]: Started cri-containerd-40b49f6c09cfae8a60d45b2a745a094248ad49098640e237f269d7f5d0d70618.scope - libcontainer container 40b49f6c09cfae8a60d45b2a745a094248ad49098640e237f269d7f5d0d70618. Mar 6 01:48:21.594449 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Mar 6 01:48:21.611372 containerd[1454]: time="2026-03-06T01:48:21.611228365Z" level=info msg="StartContainer for \"40b49f6c09cfae8a60d45b2a745a094248ad49098640e237f269d7f5d0d70618\" returns successfully" Mar 6 01:48:21.850880 systemd-networkd[1374]: cali259a01bccd0: Gained IPv6LL Mar 6 01:48:21.903796 kubelet[2537]: I0306 01:48:21.903585 2537 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:48:22.305708 kubelet[2537]: I0306 01:48:22.305624 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5bd9f68d65-srctg" podStartSLOduration=1.849359879 podStartE2EDuration="3.305612657s" podCreationTimestamp="2026-03-06 01:48:19 +0000 UTC" firstStartedPulling="2026-03-06 01:48:20.036953994 +0000 UTC m=+34.146745110" lastFinishedPulling="2026-03-06 01:48:21.493206771 +0000 UTC m=+35.602997888" observedRunningTime="2026-03-06 01:48:22.304278788 +0000 UTC m=+36.414069904" watchObservedRunningTime="2026-03-06 01:48:22.305612657 +0000 UTC m=+36.415403773" Mar 6 01:48:22.468593 systemd[1]: Started sshd@9-10.0.0.156:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904). 
Mar 6 01:48:22.548266 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:22.548691 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:22.556571 systemd-logind[1442]: New session 10 of user core. Mar 6 01:48:22.566305 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 01:48:22.748355 sshd[4501]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:22.752334 systemd[1]: sshd@9-10.0.0.156:22-10.0.0.1:46904.service: Deactivated successfully. Mar 6 01:48:22.754715 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:48:22.757001 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:48:22.759006 systemd-logind[1442]: Removed session 10. Mar 6 01:48:27.760180 systemd[1]: Started sshd@10-10.0.0.156:22-10.0.0.1:46920.service - OpenSSH per-connection server daemon (10.0.0.1:46920). Mar 6 01:48:27.797521 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 46920 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:27.799725 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:27.806737 systemd-logind[1442]: New session 11 of user core. Mar 6 01:48:27.815393 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:48:27.941552 sshd[4545]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:27.946658 systemd[1]: sshd@10-10.0.0.156:22-10.0.0.1:46920.service: Deactivated successfully. Mar 6 01:48:27.949402 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:48:27.950770 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:48:27.952347 systemd-logind[1442]: Removed session 11. 
Mar 6 01:48:28.038835 containerd[1454]: time="2026-03-06T01:48:28.037744023Z" level=info msg="StopPodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" Mar 6 01:48:28.038835 containerd[1454]: time="2026-03-06T01:48:28.038273752Z" level=info msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.106 [INFO][4583] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.106 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" iface="eth0" netns="/var/run/netns/cni-06723d55-db49-30ef-510e-2cc031852cd8" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.108 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" iface="eth0" netns="/var/run/netns/cni-06723d55-db49-30ef-510e-2cc031852cd8" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.108 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" iface="eth0" netns="/var/run/netns/cni-06723d55-db49-30ef-510e-2cc031852cd8" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.108 [INFO][4583] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.109 [INFO][4583] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.134 [INFO][4604] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.134 [INFO][4604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.134 [INFO][4604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.140 [WARNING][4604] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.140 [INFO][4604] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.142 [INFO][4604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:28.147983 containerd[1454]: 2026-03-06 01:48:28.145 [INFO][4583] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:28.155483 containerd[1454]: time="2026-03-06T01:48:28.155424665Z" level=info msg="TearDown network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" successfully" Mar 6 01:48:28.155714 containerd[1454]: time="2026-03-06T01:48:28.155560139Z" level=info msg="StopPodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" returns successfully" Mar 6 01:48:28.156341 systemd[1]: run-netns-cni\x2d06723d55\x2ddb49\x2d30ef\x2d510e\x2d2cc031852cd8.mount: Deactivated successfully. Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.101 [INFO][4582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.101 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" iface="eth0" netns="/var/run/netns/cni-1beed3e0-6e3c-94e0-defe-06670e36953a" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.102 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" iface="eth0" netns="/var/run/netns/cni-1beed3e0-6e3c-94e0-defe-06670e36953a" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.103 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" iface="eth0" netns="/var/run/netns/cni-1beed3e0-6e3c-94e0-defe-06670e36953a" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.103 [INFO][4582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.103 [INFO][4582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.141 [INFO][4597] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.141 [INFO][4597] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.143 [INFO][4597] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.148 [WARNING][4597] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.148 [INFO][4597] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.150 [INFO][4597] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:28.157789 containerd[1454]: 2026-03-06 01:48:28.155 [INFO][4582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:28.158877 containerd[1454]: time="2026-03-06T01:48:28.158724754Z" level=info msg="TearDown network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" successfully" Mar 6 01:48:28.158877 containerd[1454]: time="2026-03-06T01:48:28.158766212Z" level=info msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" returns successfully" Mar 6 01:48:28.160678 containerd[1454]: time="2026-03-06T01:48:28.160522641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-jsqkt,Uid:cc178117-4377-4398-8b69-5a7eb386dc85,Namespace:calico-system,Attempt:1,}" Mar 6 01:48:28.161417 systemd[1]: run-netns-cni\x2d1beed3e0\x2d6e3c\x2d94e0\x2ddefe\x2d06670e36953a.mount: Deactivated successfully. 
Mar 6 01:48:28.164273 kubelet[2537]: E0306 01:48:28.164248 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:28.165320 containerd[1454]: time="2026-03-06T01:48:28.164577330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9lrv6,Uid:2d6af042-9f63-4188-b4d2-c221e72cdd50,Namespace:kube-system,Attempt:1,}" Mar 6 01:48:28.310335 systemd-networkd[1374]: cali90bf64f5d9f: Link UP Mar 6 01:48:28.311351 systemd-networkd[1374]: cali90bf64f5d9f: Gained carrier Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.227 [INFO][4615] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0 calico-apiserver-849779dc44- calico-system cc178117-4377-4398-8b69-5a7eb386dc85 1089 0 2026-03-06 01:48:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849779dc44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-849779dc44-jsqkt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali90bf64f5d9f [] [] }} ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.227 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.326780 
containerd[1454]: 2026-03-06 01:48:28.259 [INFO][4644] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" HandleID="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.267 [INFO][4644] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" HandleID="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-849779dc44-jsqkt", "timestamp":"2026-03-06 01:48:28.259572639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00053f1e0)} Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.268 [INFO][4644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.268 [INFO][4644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.268 [INFO][4644] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.273 [INFO][4644] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.284 [INFO][4644] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.289 [INFO][4644] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.291 [INFO][4644] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.293 [INFO][4644] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.293 [INFO][4644] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.295 [INFO][4644] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.300 [INFO][4644] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4644] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4644] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" host="localhost" Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:28.326780 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4644] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" HandleID="k8s-pod-network.0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.308 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"cc178117-4377-4398-8b69-5a7eb386dc85", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-849779dc44-jsqkt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali90bf64f5d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.308 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.308 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90bf64f5d9f ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.311 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.311 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"cc178117-4377-4398-8b69-5a7eb386dc85", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce", Pod:"calico-apiserver-849779dc44-jsqkt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali90bf64f5d9f", MAC:"1e:ff:2f:21:34:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:28.330110 containerd[1454]: 2026-03-06 01:48:28.323 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce" 
Namespace="calico-system" Pod="calico-apiserver-849779dc44-jsqkt" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:28.356681 containerd[1454]: time="2026-03-06T01:48:28.356424166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:28.356681 containerd[1454]: time="2026-03-06T01:48:28.356503515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:28.356681 containerd[1454]: time="2026-03-06T01:48:28.356532549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:28.358066 containerd[1454]: time="2026-03-06T01:48:28.357915722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:28.384412 systemd[1]: Started cri-containerd-0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce.scope - libcontainer container 0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce. 
Mar 6 01:48:28.400535 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:48:28.425242 systemd-networkd[1374]: calif31c1827652: Link UP Mar 6 01:48:28.427581 systemd-networkd[1374]: calif31c1827652: Gained carrier Mar 6 01:48:28.436420 containerd[1454]: time="2026-03-06T01:48:28.436387302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-jsqkt,Uid:cc178117-4377-4398-8b69-5a7eb386dc85,Namespace:calico-system,Attempt:1,} returns sandbox id \"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce\"" Mar 6 01:48:28.439355 containerd[1454]: time="2026-03-06T01:48:28.439329004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.232 [INFO][4626] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--9lrv6-eth0 coredns-7d764666f9- kube-system 2d6af042-9f63-4188-b4d2-c221e72cdd50 1088 0 2026-03-06 01:47:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-9lrv6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif31c1827652 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.232 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" 
Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.264 [INFO][4650] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" HandleID="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.270 [INFO][4650] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" HandleID="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f0350), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-9lrv6", "timestamp":"2026-03-06 01:48:28.264199984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004031e0)} Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.270 [INFO][4650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.305 [INFO][4650] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.373 [INFO][4650] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.384 [INFO][4650] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.391 [INFO][4650] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.393 [INFO][4650] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.399 [INFO][4650] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.399 [INFO][4650] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.402 [INFO][4650] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4 Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.407 [INFO][4650] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.415 [INFO][4650] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.415 [INFO][4650] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" host="localhost" Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.415 [INFO][4650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:28.455413 containerd[1454]: 2026-03-06 01:48:28.415 [INFO][4650] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" HandleID="k8s-pod-network.a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.419 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9lrv6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2d6af042-9f63-4188-b4d2-c221e72cdd50", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-9lrv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif31c1827652", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.419 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.420 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif31c1827652 ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 
01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.428 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.429 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9lrv6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2d6af042-9f63-4188-b4d2-c221e72cdd50", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4", Pod:"coredns-7d764666f9-9lrv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif31c1827652", MAC:"8e:84:95:c3:44:7a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:28.456352 containerd[1454]: 2026-03-06 01:48:28.448 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4" Namespace="kube-system" Pod="coredns-7d764666f9-9lrv6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:28.484937 containerd[1454]: time="2026-03-06T01:48:28.484392990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:28.485639 containerd[1454]: time="2026-03-06T01:48:28.485567402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:28.485745 containerd[1454]: time="2026-03-06T01:48:28.485650617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:28.486105 containerd[1454]: time="2026-03-06T01:48:28.486024406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:28.510376 systemd[1]: Started cri-containerd-a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4.scope - libcontainer container a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4. Mar 6 01:48:28.532395 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:48:28.576570 containerd[1454]: time="2026-03-06T01:48:28.575905987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9lrv6,Uid:2d6af042-9f63-4188-b4d2-c221e72cdd50,Namespace:kube-system,Attempt:1,} returns sandbox id \"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4\"" Mar 6 01:48:28.578277 kubelet[2537]: E0306 01:48:28.578025 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:28.597112 containerd[1454]: time="2026-03-06T01:48:28.597012649Z" level=info msg="CreateContainer within sandbox \"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:48:28.619275 containerd[1454]: time="2026-03-06T01:48:28.619223052Z" level=info msg="CreateContainer within sandbox \"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4348d411e52bf21c67d23f5eecbc6577f50b5c93b7ab1b53ad1bd871a6c2c1bd\"" Mar 6 01:48:28.620013 containerd[1454]: time="2026-03-06T01:48:28.619925692Z" level=info msg="StartContainer for \"4348d411e52bf21c67d23f5eecbc6577f50b5c93b7ab1b53ad1bd871a6c2c1bd\"" Mar 6 01:48:28.660464 systemd[1]: Started cri-containerd-4348d411e52bf21c67d23f5eecbc6577f50b5c93b7ab1b53ad1bd871a6c2c1bd.scope - libcontainer container 4348d411e52bf21c67d23f5eecbc6577f50b5c93b7ab1b53ad1bd871a6c2c1bd. 
Mar 6 01:48:28.696018 containerd[1454]: time="2026-03-06T01:48:28.695981547Z" level=info msg="StartContainer for \"4348d411e52bf21c67d23f5eecbc6577f50b5c93b7ab1b53ad1bd871a6c2c1bd\" returns successfully" Mar 6 01:48:29.037447 containerd[1454]: time="2026-03-06T01:48:29.037375718Z" level=info msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" Mar 6 01:48:29.038083 containerd[1454]: time="2026-03-06T01:48:29.037662992Z" level=info msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.100 [INFO][4849] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.104 [INFO][4849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" iface="eth0" netns="/var/run/netns/cni-810d04a8-8a74-f621-86b7-deb48e8df62a" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.104 [INFO][4849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" iface="eth0" netns="/var/run/netns/cni-810d04a8-8a74-f621-86b7-deb48e8df62a" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" iface="eth0" netns="/var/run/netns/cni-810d04a8-8a74-f621-86b7-deb48e8df62a" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4849] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4849] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.162 [INFO][4868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.162 [INFO][4868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.162 [INFO][4868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.171 [WARNING][4868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.171 [INFO][4868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.172 [INFO][4868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:29.180391 containerd[1454]: 2026-03-06 01:48:29.175 [INFO][4849] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:29.193016 containerd[1454]: time="2026-03-06T01:48:29.192324684Z" level=info msg="TearDown network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" successfully" Mar 6 01:48:29.193016 containerd[1454]: time="2026-03-06T01:48:29.192413390Z" level=info msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" returns successfully" Mar 6 01:48:29.194386 systemd[1]: run-netns-cni\x2d810d04a8\x2d8a74\x2df621\x2d86b7\x2ddeb48e8df62a.mount: Deactivated successfully. 
Mar 6 01:48:29.198828 kubelet[2537]: E0306 01:48:29.198671 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:29.202637 containerd[1454]: time="2026-03-06T01:48:29.201966569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wdml,Uid:7582639d-989e-494f-9494-c73a5ce2a100,Namespace:kube-system,Attempt:1,}" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.102 [INFO][4848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.103 [INFO][4848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" iface="eth0" netns="/var/run/netns/cni-9cc75503-89be-e913-0708-d2fbaddb3817" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" iface="eth0" netns="/var/run/netns/cni-9cc75503-89be-e913-0708-d2fbaddb3817" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" iface="eth0" netns="/var/run/netns/cni-9cc75503-89be-e913-0708-d2fbaddb3817" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.105 [INFO][4848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.164 [INFO][4870] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.165 [INFO][4870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.172 [INFO][4870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.180 [WARNING][4870] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.181 [INFO][4870] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.195 [INFO][4870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:29.204110 containerd[1454]: 2026-03-06 01:48:29.199 [INFO][4848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:29.205376 containerd[1454]: time="2026-03-06T01:48:29.205326212Z" level=info msg="TearDown network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" successfully" Mar 6 01:48:29.205376 containerd[1454]: time="2026-03-06T01:48:29.205365596Z" level=info msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" returns successfully" Mar 6 01:48:29.208942 systemd[1]: run-netns-cni\x2d9cc75503\x2d89be\x2de913\x2d0708\x2dd2fbaddb3817.mount: Deactivated successfully. 
Mar 6 01:48:29.210417 containerd[1454]: time="2026-03-06T01:48:29.209057631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sf89w,Uid:0b8104ea-0f2b-4826-8b83-6a37cdde3bc1,Namespace:calico-system,Attempt:1,}" Mar 6 01:48:29.316184 kubelet[2537]: E0306 01:48:29.314655 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:29.350754 kubelet[2537]: I0306 01:48:29.350628 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9lrv6" podStartSLOduration=38.350551503 podStartE2EDuration="38.350551503s" podCreationTimestamp="2026-03-06 01:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:48:29.334401835 +0000 UTC m=+43.444192951" watchObservedRunningTime="2026-03-06 01:48:29.350551503 +0000 UTC m=+43.460342619" Mar 6 01:48:29.440445 systemd-networkd[1374]: cali9dfba2eaa4b: Link UP Mar 6 01:48:29.441720 systemd-networkd[1374]: cali9dfba2eaa4b: Gained carrier Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.278 [INFO][4885] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--6wdml-eth0 coredns-7d764666f9- kube-system 7582639d-989e-494f-9494-c73a5ce2a100 1105 0 2026-03-06 01:47:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-6wdml eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9dfba2eaa4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.279 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.379 [INFO][4916] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" HandleID="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.388 [INFO][4916] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" HandleID="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-6wdml", "timestamp":"2026-03-06 01:48:29.379650833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001402c0)} Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.388 [INFO][4916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.388 [INFO][4916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.388 [INFO][4916] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.391 [INFO][4916] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.397 [INFO][4916] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.404 [INFO][4916] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.407 [INFO][4916] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.409 [INFO][4916] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.409 [INFO][4916] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.412 [INFO][4916] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126 Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.419 [INFO][4916] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4916] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4916] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" host="localhost" Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:29.455278 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4916] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" HandleID="k8s-pod-network.0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.431 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--6wdml-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7582639d-989e-494f-9494-c73a5ce2a100", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-6wdml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dfba2eaa4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.432 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.432 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dfba2eaa4b ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" 
Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.438 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.439 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--6wdml-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7582639d-989e-494f-9494-c73a5ce2a100", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126", Pod:"coredns-7d764666f9-6wdml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dfba2eaa4b", MAC:"32:fd:01:3b:84:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:29.456091 containerd[1454]: 2026-03-06 01:48:29.452 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126" Namespace="kube-system" Pod="coredns-7d764666f9-6wdml" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:29.509960 containerd[1454]: time="2026-03-06T01:48:29.509585677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:29.509960 containerd[1454]: time="2026-03-06T01:48:29.509689841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:29.509960 containerd[1454]: time="2026-03-06T01:48:29.509705651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:29.509960 containerd[1454]: time="2026-03-06T01:48:29.509897259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:29.544376 systemd-networkd[1374]: cali93c103cae43: Link UP Mar 6 01:48:29.548944 systemd-networkd[1374]: cali93c103cae43: Gained carrier Mar 6 01:48:29.555841 systemd[1]: Started cri-containerd-0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126.scope - libcontainer container 0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126. Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.297 [INFO][4904] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--sf89w-eth0 goldmane-9f7667bb8- calico-system 0b8104ea-0f2b-4826-8b83-6a37cdde3bc1 1106 0 2026-03-06 01:48:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-sf89w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali93c103cae43 [] [] }} ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.297 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.399 [INFO][4923] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" HandleID="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.408 [INFO][4923] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" HandleID="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000341ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-sf89w", "timestamp":"2026-03-06 01:48:29.399914971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005e9080)} Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.408 [INFO][4923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.428 [INFO][4923] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.493 [INFO][4923] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.501 [INFO][4923] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.508 [INFO][4923] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.511 [INFO][4923] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.513 [INFO][4923] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.513 [INFO][4923] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.516 [INFO][4923] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8 Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.522 [INFO][4923] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.531 [INFO][4923] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.531 [INFO][4923] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" host="localhost" Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.532 [INFO][4923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:29.567097 containerd[1454]: 2026-03-06 01:48:29.532 [INFO][4923] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" HandleID="k8s-pod-network.6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.536 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sf89w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-sf89w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c103cae43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.536 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.536 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93c103cae43 ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.549 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.550 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sf89w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8", Pod:"goldmane-9f7667bb8-sf89w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c103cae43", MAC:"42:e8:9f:62:c9:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:29.567728 containerd[1454]: 2026-03-06 01:48:29.561 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8" Namespace="calico-system" Pod="goldmane-9f7667bb8-sf89w" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:29.585875 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such 
device or address Mar 6 01:48:29.630718 containerd[1454]: time="2026-03-06T01:48:29.630381910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:48:29.630718 containerd[1454]: time="2026-03-06T01:48:29.630444306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:48:29.630718 containerd[1454]: time="2026-03-06T01:48:29.630457762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:29.630718 containerd[1454]: time="2026-03-06T01:48:29.630551246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:48:29.638917 containerd[1454]: time="2026-03-06T01:48:29.638877622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wdml,Uid:7582639d-989e-494f-9494-c73a5ce2a100,Namespace:kube-system,Attempt:1,} returns sandbox id \"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126\"" Mar 6 01:48:29.641009 kubelet[2537]: E0306 01:48:29.640426 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:29.648219 containerd[1454]: time="2026-03-06T01:48:29.648187852Z" level=info msg="CreateContainer within sandbox \"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:48:29.658516 systemd-networkd[1374]: cali90bf64f5d9f: Gained IPv6LL Mar 6 01:48:29.677396 systemd[1]: Started cri-containerd-6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8.scope - libcontainer container 6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8. 
Mar 6 01:48:29.682559 containerd[1454]: time="2026-03-06T01:48:29.682419592Z" level=info msg="CreateContainer within sandbox \"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6756c060d3f5c62e88672941e6b46279c63819519438683b7ba7ab1d0982898\"" Mar 6 01:48:29.685402 containerd[1454]: time="2026-03-06T01:48:29.685325957Z" level=info msg="StartContainer for \"b6756c060d3f5c62e88672941e6b46279c63819519438683b7ba7ab1d0982898\"" Mar 6 01:48:29.711657 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:48:29.741315 systemd[1]: Started cri-containerd-b6756c060d3f5c62e88672941e6b46279c63819519438683b7ba7ab1d0982898.scope - libcontainer container b6756c060d3f5c62e88672941e6b46279c63819519438683b7ba7ab1d0982898. Mar 6 01:48:29.752244 containerd[1454]: time="2026-03-06T01:48:29.752087048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-sf89w,Uid:0b8104ea-0f2b-4826-8b83-6a37cdde3bc1,Namespace:calico-system,Attempt:1,} returns sandbox id \"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8\"" Mar 6 01:48:29.785520 containerd[1454]: time="2026-03-06T01:48:29.785464778Z" level=info msg="StartContainer for \"b6756c060d3f5c62e88672941e6b46279c63819519438683b7ba7ab1d0982898\" returns successfully" Mar 6 01:48:29.786691 systemd-networkd[1374]: calif31c1827652: Gained IPv6LL Mar 6 01:48:30.038416 containerd[1454]: time="2026-03-06T01:48:30.038202593Z" level=info msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.106 [INFO][5107] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.106 [INFO][5107] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" iface="eth0" netns="/var/run/netns/cni-da9b411e-2caf-3dad-cdcd-75a252132dbb" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.106 [INFO][5107] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" iface="eth0" netns="/var/run/netns/cni-da9b411e-2caf-3dad-cdcd-75a252132dbb" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.107 [INFO][5107] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" iface="eth0" netns="/var/run/netns/cni-da9b411e-2caf-3dad-cdcd-75a252132dbb" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.107 [INFO][5107] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.107 [INFO][5107] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.144 [INFO][5116] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.144 [INFO][5116] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.144 [INFO][5116] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.152 [WARNING][5116] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.152 [INFO][5116] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.154 [INFO][5116] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:30.164879 containerd[1454]: 2026-03-06 01:48:30.162 [INFO][5107] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:30.166222 containerd[1454]: time="2026-03-06T01:48:30.165948609Z" level=info msg="TearDown network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" successfully" Mar 6 01:48:30.166222 containerd[1454]: time="2026-03-06T01:48:30.165981741Z" level=info msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" returns successfully" Mar 6 01:48:30.168357 systemd[1]: run-netns-cni\x2dda9b411e\x2d2caf\x2d3dad\x2dcdcd\x2d75a252132dbb.mount: Deactivated successfully. 
Mar 6 01:48:30.171334 containerd[1454]: time="2026-03-06T01:48:30.171282572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f669bb65f-fqbtc,Uid:17d574a6-9dbc-4f4c-a51a-e2c93d76716b,Namespace:calico-system,Attempt:1,}" Mar 6 01:48:30.342239 containerd[1454]: time="2026-03-06T01:48:30.341913329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:30.343949 containerd[1454]: time="2026-03-06T01:48:30.343232601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 01:48:30.344835 containerd[1454]: time="2026-03-06T01:48:30.344745505Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:30.346282 kubelet[2537]: E0306 01:48:30.346217 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:30.346863 kubelet[2537]: E0306 01:48:30.346778 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:48:30.353695 containerd[1454]: time="2026-03-06T01:48:30.352403945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:48:30.353695 containerd[1454]: time="2026-03-06T01:48:30.353187976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.913735391s" Mar 6 01:48:30.353695 containerd[1454]: time="2026-03-06T01:48:30.353216810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:48:30.359759 containerd[1454]: time="2026-03-06T01:48:30.358355784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 01:48:30.368716 containerd[1454]: time="2026-03-06T01:48:30.368636309Z" level=info msg="CreateContainer within sandbox \"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:48:30.371986 kubelet[2537]: I0306 01:48:30.371854 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6wdml" podStartSLOduration=39.371787518 podStartE2EDuration="39.371787518s" podCreationTimestamp="2026-03-06 01:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:48:30.370019848 +0000 UTC m=+44.479810974" watchObservedRunningTime="2026-03-06 01:48:30.371787518 +0000 UTC m=+44.481578633" Mar 6 01:48:30.412095 containerd[1454]: time="2026-03-06T01:48:30.411981499Z" level=info msg="CreateContainer within sandbox \"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9019071fb9bc9d73160e4523ac0b43538effd4d7ac2f154c5a0aee0d7985da3\"" Mar 6 01:48:30.413663 containerd[1454]: time="2026-03-06T01:48:30.413475207Z" level=info msg="StartContainer for \"b9019071fb9bc9d73160e4523ac0b43538effd4d7ac2f154c5a0aee0d7985da3\"" Mar 6 01:48:30.462779 systemd[1]: Started 
cri-containerd-b9019071fb9bc9d73160e4523ac0b43538effd4d7ac2f154c5a0aee0d7985da3.scope - libcontainer container b9019071fb9bc9d73160e4523ac0b43538effd4d7ac2f154c5a0aee0d7985da3. Mar 6 01:48:30.507751 systemd-networkd[1374]: cali33ee7680f9f: Link UP Mar 6 01:48:30.510723 systemd-networkd[1374]: cali33ee7680f9f: Gained carrier Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.360 [INFO][5124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0 calico-kube-controllers-5f669bb65f- calico-system 17d574a6-9dbc-4f4c-a51a-e2c93d76716b 1133 0 2026-03-06 01:48:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f669bb65f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f669bb65f-fqbtc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali33ee7680f9f [] [] }} ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-" Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.360 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.435 [INFO][5142] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" 
HandleID="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.450 [INFO][5142] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" HandleID="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000377ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f669bb65f-fqbtc", "timestamp":"2026-03-06 01:48:30.435924353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a82c0)} Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.450 [INFO][5142] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.450 [INFO][5142] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.450 [INFO][5142] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.454 [INFO][5142] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.465 [INFO][5142] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.474 [INFO][5142] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.478 [INFO][5142] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.481 [INFO][5142] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.481 [INFO][5142] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.484 [INFO][5142] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.489 [INFO][5142] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.496 [INFO][5142] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.496 [INFO][5142] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" host="localhost"
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.496 [INFO][5142] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 01:48:30.529914 containerd[1454]: 2026-03-06 01:48:30.496 [INFO][5142] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" HandleID="k8s-pod-network.7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0"
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.500 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0", GenerateName:"calico-kube-controllers-5f669bb65f-", Namespace:"calico-system", SelfLink:"", UID:"17d574a6-9dbc-4f4c-a51a-e2c93d76716b", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f669bb65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f669bb65f-fqbtc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33ee7680f9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.501 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0"
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.501 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33ee7680f9f ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0"
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.512 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0"
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.512 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0", GenerateName:"calico-kube-controllers-5f669bb65f-", Namespace:"calico-system", SelfLink:"", UID:"17d574a6-9dbc-4f4c-a51a-e2c93d76716b", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f669bb65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7", Pod:"calico-kube-controllers-5f669bb65f-fqbtc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33ee7680f9f", MAC:"a6:b7:e4:19:c7:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 01:48:30.533575 containerd[1454]: 2026-03-06 01:48:30.524 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7" Namespace="calico-system" Pod="calico-kube-controllers-5f669bb65f-fqbtc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0"
Mar 6 01:48:30.541388 containerd[1454]: time="2026-03-06T01:48:30.541313367Z" level=info msg="StartContainer for \"b9019071fb9bc9d73160e4523ac0b43538effd4d7ac2f154c5a0aee0d7985da3\" returns successfully"
Mar 6 01:48:30.578317 containerd[1454]: time="2026-03-06T01:48:30.577270432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:48:30.578317 containerd[1454]: time="2026-03-06T01:48:30.577400966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:48:30.578317 containerd[1454]: time="2026-03-06T01:48:30.577415984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:30.578317 containerd[1454]: time="2026-03-06T01:48:30.577510841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:30.612384 systemd[1]: Started cri-containerd-7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7.scope - libcontainer container 7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7.
Mar 6 01:48:30.619546 systemd-networkd[1374]: cali93c103cae43: Gained IPv6LL
Mar 6 01:48:30.641038 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 6 01:48:30.692082 containerd[1454]: time="2026-03-06T01:48:30.691971538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f669bb65f-fqbtc,Uid:17d574a6-9dbc-4f4c-a51a-e2c93d76716b,Namespace:calico-system,Attempt:1,} returns sandbox id \"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7\""
Mar 6 01:48:30.810566 systemd-networkd[1374]: cali9dfba2eaa4b: Gained IPv6LL
Mar 6 01:48:31.360964 kubelet[2537]: E0306 01:48:31.360874 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:31.367015 kubelet[2537]: E0306 01:48:31.361877 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:31.899640 systemd-networkd[1374]: cali33ee7680f9f: Gained IPv6LL
Mar 6 01:48:31.907303 kubelet[2537]: I0306 01:48:31.905748 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-849779dc44-jsqkt" podStartSLOduration=28.988848254 podStartE2EDuration="30.905736017s" podCreationTimestamp="2026-03-06 01:48:01 +0000 UTC" firstStartedPulling="2026-03-06 01:48:28.438215567 +0000 UTC m=+42.548006683" lastFinishedPulling="2026-03-06 01:48:30.355103331 +0000 UTC m=+44.464894446" observedRunningTime="2026-03-06 01:48:31.381010033 +0000 UTC m=+45.490801150" watchObservedRunningTime="2026-03-06 01:48:31.905736017 +0000 UTC m=+46.015527143"
Mar 6 01:48:32.037068 containerd[1454]: time="2026-03-06T01:48:32.036798444Z" level=info msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\""
Mar 6 01:48:32.129022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999478890.mount: Deactivated successfully.
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.115 [INFO][5297] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.115 [INFO][5297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" iface="eth0" netns="/var/run/netns/cni-297e9d99-278e-f761-56bf-c05caab02656"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.116 [INFO][5297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" iface="eth0" netns="/var/run/netns/cni-297e9d99-278e-f761-56bf-c05caab02656"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.119 [INFO][5297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" iface="eth0" netns="/var/run/netns/cni-297e9d99-278e-f761-56bf-c05caab02656"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.119 [INFO][5297] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.119 [INFO][5297] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.161 [INFO][5307] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.161 [INFO][5307] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.161 [INFO][5307] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.168 [WARNING][5307] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.168 [INFO][5307] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.171 [INFO][5307] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 01:48:32.185263 containerd[1454]: 2026-03-06 01:48:32.177 [INFO][5297] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133"
Mar 6 01:48:32.185263 containerd[1454]: time="2026-03-06T01:48:32.183345233Z" level=info msg="TearDown network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" successfully"
Mar 6 01:48:32.185263 containerd[1454]: time="2026-03-06T01:48:32.183417297Z" level=info msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" returns successfully"
Mar 6 01:48:32.188113 systemd[1]: run-netns-cni\x2d297e9d99\x2d278e\x2df761\x2d56bf\x2dc05caab02656.mount: Deactivated successfully.
Mar 6 01:48:32.191285 containerd[1454]: time="2026-03-06T01:48:32.191237162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-7w6q8,Uid:e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1,Namespace:calico-system,Attempt:1,}"
Mar 6 01:48:32.349416 systemd-networkd[1374]: cali0b12d27dcc3: Link UP
Mar 6 01:48:32.351566 systemd-networkd[1374]: cali0b12d27dcc3: Gained carrier
Mar 6 01:48:32.366990 kubelet[2537]: E0306 01:48:32.364248 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.254 [INFO][5318] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0 calico-apiserver-849779dc44- calico-system e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1 1168 0 2026-03-06 01:48:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849779dc44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-849779dc44-7w6q8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0b12d27dcc3 [] [] }} ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.254 [INFO][5318] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.292 [INFO][5333] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" HandleID="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.299 [INFO][5333] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" HandleID="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ee8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-849779dc44-7w6q8", "timestamp":"2026-03-06 01:48:32.292967566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d0000)}
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.299 [INFO][5333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.299 [INFO][5333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.300 [INFO][5333] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.302 [INFO][5333] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.308 [INFO][5333] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.315 [INFO][5333] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.317 [INFO][5333] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.320 [INFO][5333] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.321 [INFO][5333] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.324 [INFO][5333] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.330 [INFO][5333] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.338 [INFO][5333] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.339 [INFO][5333] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" host="localhost"
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.339 [INFO][5333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 01:48:32.390750 containerd[1454]: 2026-03-06 01:48:32.339 [INFO][5333] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" HandleID="k8s-pod-network.c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.343 [INFO][5318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-849779dc44-7w6q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0b12d27dcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.343 [INFO][5318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.343 [INFO][5318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b12d27dcc3 ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.353 [INFO][5318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.354 [INFO][5318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b", Pod:"calico-apiserver-849779dc44-7w6q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0b12d27dcc3", MAC:"7a:14:62:16:7d:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 01:48:32.391429 containerd[1454]: 2026-03-06 01:48:32.373 [INFO][5318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b" Namespace="calico-system" Pod="calico-apiserver-849779dc44-7w6q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0"
Mar 6 01:48:32.489298 containerd[1454]: time="2026-03-06T01:48:32.486567082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 6 01:48:32.489298 containerd[1454]: time="2026-03-06T01:48:32.489083928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 6 01:48:32.489298 containerd[1454]: time="2026-03-06T01:48:32.489098736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:32.489298 containerd[1454]: time="2026-03-06T01:48:32.489249667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 6 01:48:32.569373 systemd[1]: Started cri-containerd-c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b.scope - libcontainer container c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b.
Mar 6 01:48:32.628591 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 6 01:48:32.683359 containerd[1454]: time="2026-03-06T01:48:32.683317408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849779dc44-7w6q8,Uid:e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b\""
Mar 6 01:48:32.731474 containerd[1454]: time="2026-03-06T01:48:32.731391453Z" level=info msg="CreateContainer within sandbox \"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 01:48:32.762878 containerd[1454]: time="2026-03-06T01:48:32.762729216Z" level=info msg="CreateContainer within sandbox \"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25d8a6c5c5277d56be0530537fcff5da1fd3b54e9a2bc220f2e85b774c64d210\""
Mar 6 01:48:32.764011 containerd[1454]: time="2026-03-06T01:48:32.763969400Z" level=info msg="StartContainer for \"25d8a6c5c5277d56be0530537fcff5da1fd3b54e9a2bc220f2e85b774c64d210\""
Mar 6 01:48:32.806678 systemd[1]: Started cri-containerd-25d8a6c5c5277d56be0530537fcff5da1fd3b54e9a2bc220f2e85b774c64d210.scope - libcontainer container 25d8a6c5c5277d56be0530537fcff5da1fd3b54e9a2bc220f2e85b774c64d210.
Mar 6 01:48:32.825514 containerd[1454]: time="2026-03-06T01:48:32.825284708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:32.827029 containerd[1454]: time="2026-03-06T01:48:32.826684765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 6 01:48:32.828517 containerd[1454]: time="2026-03-06T01:48:32.828250009Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:32.831665 containerd[1454]: time="2026-03-06T01:48:32.831638078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:32.832729 containerd[1454]: time="2026-03-06T01:48:32.832661963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.474123117s"
Mar 6 01:48:32.832729 containerd[1454]: time="2026-03-06T01:48:32.832724169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 6 01:48:32.835965 containerd[1454]: time="2026-03-06T01:48:32.835835468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 6 01:48:32.840418 containerd[1454]: time="2026-03-06T01:48:32.840389678Z" level=info msg="CreateContainer within sandbox \"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 6 01:48:32.862741 containerd[1454]: time="2026-03-06T01:48:32.862606999Z" level=info msg="CreateContainer within sandbox \"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"729cdce8ab02c053fb020008e0959a3585dc275baaaa147cf0c8c6fbfebbc839\""
Mar 6 01:48:32.865194 containerd[1454]: time="2026-03-06T01:48:32.865102003Z" level=info msg="StartContainer for \"729cdce8ab02c053fb020008e0959a3585dc275baaaa147cf0c8c6fbfebbc839\""
Mar 6 01:48:32.870799 containerd[1454]: time="2026-03-06T01:48:32.870330034Z" level=info msg="StartContainer for \"25d8a6c5c5277d56be0530537fcff5da1fd3b54e9a2bc220f2e85b774c64d210\" returns successfully"
Mar 6 01:48:32.924667 systemd[1]: Started cri-containerd-729cdce8ab02c053fb020008e0959a3585dc275baaaa147cf0c8c6fbfebbc839.scope - libcontainer container 729cdce8ab02c053fb020008e0959a3585dc275baaaa147cf0c8c6fbfebbc839.
Mar 6 01:48:32.973029 systemd[1]: Started sshd@11-10.0.0.156:22-10.0.0.1:45102.service - OpenSSH per-connection server daemon (10.0.0.1:45102).
Mar 6 01:48:33.028611 containerd[1454]: time="2026-03-06T01:48:33.027273487Z" level=info msg="StartContainer for \"729cdce8ab02c053fb020008e0959a3585dc275baaaa147cf0c8c6fbfebbc839\" returns successfully"
Mar 6 01:48:33.032286 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 45102 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:48:33.034894 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:48:33.043283 systemd-logind[1442]: New session 12 of user core.
Mar 6 01:48:33.047620 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 6 01:48:33.370306 sshd[5478]: pam_unix(sshd:session): session closed for user core
Mar 6 01:48:33.377035 systemd[1]: sshd@11-10.0.0.156:22-10.0.0.1:45102.service: Deactivated successfully.
Mar 6 01:48:33.381495 systemd[1]: session-12.scope: Deactivated successfully.
Mar 6 01:48:33.383749 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Mar 6 01:48:33.390058 systemd-logind[1442]: Removed session 12.
Mar 6 01:48:33.393534 kubelet[2537]: I0306 01:48:33.393311 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-sf89w" podStartSLOduration=29.312859903 podStartE2EDuration="32.39330001s" podCreationTimestamp="2026-03-06 01:48:01 +0000 UTC" firstStartedPulling="2026-03-06 01:48:29.754743681 +0000 UTC m=+43.864534797" lastFinishedPulling="2026-03-06 01:48:32.835183789 +0000 UTC m=+46.944974904" observedRunningTime="2026-03-06 01:48:33.392576119 +0000 UTC m=+47.502367235" watchObservedRunningTime="2026-03-06 01:48:33.39330001 +0000 UTC m=+47.503091126"
Mar 6 01:48:33.407472 kubelet[2537]: I0306 01:48:33.407394 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-849779dc44-7w6q8" podStartSLOduration=32.407379947 podStartE2EDuration="32.407379947s" podCreationTimestamp="2026-03-06 01:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:48:33.405750085 +0000 UTC m=+47.515541231" watchObservedRunningTime="2026-03-06 01:48:33.407379947 +0000 UTC m=+47.517171063"
Mar 6 01:48:33.498400 systemd-networkd[1374]: cali0b12d27dcc3: Gained IPv6LL
Mar 6 01:48:34.383328 kubelet[2537]: I0306 01:48:34.382589 2537 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 6 01:48:34.943957 containerd[1454]: time="2026-03-06T01:48:34.943890503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:34.944858 containerd[1454]: time="2026-03-06T01:48:34.944780665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 6 01:48:34.946892 containerd[1454]: time="2026-03-06T01:48:34.946806417Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:34.950343 containerd[1454]: time="2026-03-06T01:48:34.950280798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.114414322s"
Mar 6 01:48:34.950343 containerd[1454]: time="2026-03-06T01:48:34.950331463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 6 01:48:34.968288 containerd[1454]: time="2026-03-06T01:48:34.968240063Z" level=info msg="CreateContainer within sandbox \"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 6 01:48:34.979969 containerd[1454]: time="2026-03-06T01:48:34.979787279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 01:48:34.984172 containerd[1454]: time="2026-03-06T01:48:34.984070476Z" level=info msg="CreateContainer within sandbox \"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1f8c9042dd129f52b66d07ae94672b5dcce90c447ea7b36198211ba337db3471\""
Mar 6 01:48:34.984760 containerd[1454]: time="2026-03-06T01:48:34.984725068Z" level=info msg="StartContainer for \"1f8c9042dd129f52b66d07ae94672b5dcce90c447ea7b36198211ba337db3471\""
Mar 6 01:48:35.024354 systemd[1]: Started cri-containerd-1f8c9042dd129f52b66d07ae94672b5dcce90c447ea7b36198211ba337db3471.scope - libcontainer container 1f8c9042dd129f52b66d07ae94672b5dcce90c447ea7b36198211ba337db3471.
Mar 6 01:48:35.075188 containerd[1454]: time="2026-03-06T01:48:35.074574264Z" level=info msg="StartContainer for \"1f8c9042dd129f52b66d07ae94672b5dcce90c447ea7b36198211ba337db3471\" returns successfully"
Mar 6 01:48:36.467495 kubelet[2537]: I0306 01:48:36.467393 2537 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f669bb65f-fqbtc" podStartSLOduration=30.210590419 podStartE2EDuration="34.467378281s" podCreationTimestamp="2026-03-06 01:48:02 +0000 UTC" firstStartedPulling="2026-03-06 01:48:30.694599444 +0000 UTC m=+44.804390560" lastFinishedPulling="2026-03-06 01:48:34.951387306 +0000 UTC m=+49.061178422" observedRunningTime="2026-03-06 01:48:35.404589816 +0000 UTC m=+49.514380931" watchObservedRunningTime="2026-03-06 01:48:36.467378281 +0000 UTC m=+50.577169397"
Mar 6 01:48:38.382074 systemd[1]: Started sshd@12-10.0.0.156:22-10.0.0.1:45114.service - OpenSSH per-connection server daemon (10.0.0.1:45114).
Mar 6 01:48:38.447704 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 45114 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M
Mar 6 01:48:38.449964 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 01:48:38.457458 systemd-logind[1442]: New session 13 of user core.
Mar 6 01:48:38.471380 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 6 01:48:38.704372 sshd[5667]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:38.709216 systemd[1]: sshd@12-10.0.0.156:22-10.0.0.1:45114.service: Deactivated successfully. Mar 6 01:48:38.711780 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:48:38.712861 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:48:38.714416 systemd-logind[1442]: Removed session 13. Mar 6 01:48:43.726945 systemd[1]: Started sshd@13-10.0.0.156:22-10.0.0.1:59708.service - OpenSSH per-connection server daemon (10.0.0.1:59708). Mar 6 01:48:43.773406 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 59708 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:43.775035 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:43.779912 systemd-logind[1442]: New session 14 of user core. Mar 6 01:48:43.788329 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:48:43.933635 sshd[5714]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:43.945793 systemd[1]: sshd@13-10.0.0.156:22-10.0.0.1:59708.service: Deactivated successfully. Mar 6 01:48:43.949772 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:48:43.952798 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:48:43.964544 systemd[1]: Started sshd@14-10.0.0.156:22-10.0.0.1:59710.service - OpenSSH per-connection server daemon (10.0.0.1:59710). Mar 6 01:48:43.966642 systemd-logind[1442]: Removed session 14. Mar 6 01:48:43.993986 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 59710 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:43.995768 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:44.001567 systemd-logind[1442]: New session 15 of user core. 
Mar 6 01:48:44.010337 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:48:44.217731 sshd[5729]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:44.227614 systemd[1]: sshd@14-10.0.0.156:22-10.0.0.1:59710.service: Deactivated successfully. Mar 6 01:48:44.230012 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:48:44.232415 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:48:44.239524 systemd[1]: Started sshd@15-10.0.0.156:22-10.0.0.1:59724.service - OpenSSH per-connection server daemon (10.0.0.1:59724). Mar 6 01:48:44.242559 systemd-logind[1442]: Removed session 15. Mar 6 01:48:44.291418 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 59724 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:44.293335 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:44.298707 systemd-logind[1442]: New session 16 of user core. Mar 6 01:48:44.305304 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:48:44.459448 sshd[5742]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:44.463985 systemd[1]: sshd@15-10.0.0.156:22-10.0.0.1:59724.service: Deactivated successfully. Mar 6 01:48:44.466643 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:48:44.468612 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:48:44.470176 systemd-logind[1442]: Removed session 16. Mar 6 01:48:46.021482 containerd[1454]: time="2026-03-06T01:48:46.020321636Z" level=info msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\"" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.084 [WARNING][5765] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b", Pod:"calico-apiserver-849779dc44-7w6q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0b12d27dcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.085 [INFO][5765] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.085 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" iface="eth0" netns="" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.085 [INFO][5765] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.085 [INFO][5765] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.141 [INFO][5776] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.142 [INFO][5776] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.142 [INFO][5776] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.153 [WARNING][5776] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.153 [INFO][5776] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.155 [INFO][5776] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.162281 containerd[1454]: 2026-03-06 01:48:46.158 [INFO][5765] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.162281 containerd[1454]: time="2026-03-06T01:48:46.162228462Z" level=info msg="TearDown network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" successfully" Mar 6 01:48:46.162281 containerd[1454]: time="2026-03-06T01:48:46.162264490Z" level=info msg="StopPodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" returns successfully" Mar 6 01:48:46.193446 containerd[1454]: time="2026-03-06T01:48:46.193360531Z" level=info msg="RemovePodSandbox for \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\"" Mar 6 01:48:46.195552 containerd[1454]: time="2026-03-06T01:48:46.195482153Z" level=info msg="Forcibly stopping sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\"" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.240 [WARNING][5795] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"e62d0a0a-7e1b-4607-a9ed-d2e56e6ebec1", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6daf7a9c581b9b49a6f5145d94f7b80bf31468ffc1c452b5dd6d0620da5fa3b", Pod:"calico-apiserver-849779dc44-7w6q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0b12d27dcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.240 [INFO][5795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.240 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" iface="eth0" netns="" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.240 [INFO][5795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.240 [INFO][5795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.280 [INFO][5803] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.281 [INFO][5803] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.281 [INFO][5803] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.287 [WARNING][5803] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.287 [INFO][5803] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" HandleID="k8s-pod-network.56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Workload="localhost-k8s-calico--apiserver--849779dc44--7w6q8-eth0" Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.289 [INFO][5803] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.297400 containerd[1454]: 2026-03-06 01:48:46.293 [INFO][5795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133" Mar 6 01:48:46.297400 containerd[1454]: time="2026-03-06T01:48:46.297331616Z" level=info msg="TearDown network for sandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" successfully" Mar 6 01:48:46.320013 containerd[1454]: time="2026-03-06T01:48:46.319945469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:46.320097 containerd[1454]: time="2026-03-06T01:48:46.320053680Z" level=info msg="RemovePodSandbox \"56b51bf3bd5ddfd74e901f844558105b4b907d465fb3f4925dafe68b030ed133\" returns successfully" Mar 6 01:48:46.327252 containerd[1454]: time="2026-03-06T01:48:46.327213111Z" level=info msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.382 [WARNING][5821] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" WorkloadEndpoint="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.382 [INFO][5821] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.382 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" iface="eth0" netns="" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.382 [INFO][5821] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.383 [INFO][5821] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.425 [INFO][5830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.426 [INFO][5830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.426 [INFO][5830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.435 [WARNING][5830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.435 [INFO][5830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.437 [INFO][5830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.445802 containerd[1454]: 2026-03-06 01:48:46.441 [INFO][5821] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.446293 containerd[1454]: time="2026-03-06T01:48:46.445876427Z" level=info msg="TearDown network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" successfully" Mar 6 01:48:46.446293 containerd[1454]: time="2026-03-06T01:48:46.445912545Z" level=info msg="StopPodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" returns successfully" Mar 6 01:48:46.446449 containerd[1454]: time="2026-03-06T01:48:46.446415453Z" level=info msg="RemovePodSandbox for \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" Mar 6 01:48:46.446581 containerd[1454]: time="2026-03-06T01:48:46.446540407Z" level=info msg="Forcibly stopping sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\"" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.498 [WARNING][5848] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" 
WorkloadEndpoint="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.499 [INFO][5848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.499 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" iface="eth0" netns="" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.499 [INFO][5848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.499 [INFO][5848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.531 [INFO][5856] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.532 [INFO][5856] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.532 [INFO][5856] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.538 [WARNING][5856] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.538 [INFO][5856] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" HandleID="k8s-pod-network.ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Workload="localhost-k8s-whisker--698654fb6d--bqsjg-eth0" Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.540 [INFO][5856] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.546271 containerd[1454]: 2026-03-06 01:48:46.543 [INFO][5848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e" Mar 6 01:48:46.546970 containerd[1454]: time="2026-03-06T01:48:46.546329842Z" level=info msg="TearDown network for sandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" successfully" Mar 6 01:48:46.551079 containerd[1454]: time="2026-03-06T01:48:46.550899293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:46.551079 containerd[1454]: time="2026-03-06T01:48:46.550999290Z" level=info msg="RemovePodSandbox \"ce0198f2b1dca3f152582a04953200877cb2199464146be9039fb3efafd8522e\" returns successfully" Mar 6 01:48:46.551931 containerd[1454]: time="2026-03-06T01:48:46.551641800Z" level=info msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.602 [WARNING][5874] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0", GenerateName:"calico-kube-controllers-5f669bb65f-", Namespace:"calico-system", SelfLink:"", UID:"17d574a6-9dbc-4f4c-a51a-e2c93d76716b", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f669bb65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7", Pod:"calico-kube-controllers-5f669bb65f-fqbtc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33ee7680f9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.602 [INFO][5874] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.602 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" iface="eth0" netns="" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.602 [INFO][5874] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.602 [INFO][5874] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.632 [INFO][5883] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.633 [INFO][5883] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.633 [INFO][5883] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.640 [WARNING][5883] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.640 [INFO][5883] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.642 [INFO][5883] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.648226 containerd[1454]: 2026-03-06 01:48:46.645 [INFO][5874] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.648226 containerd[1454]: time="2026-03-06T01:48:46.648061961Z" level=info msg="TearDown network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" successfully" Mar 6 01:48:46.648226 containerd[1454]: time="2026-03-06T01:48:46.648087189Z" level=info msg="StopPodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" returns successfully" Mar 6 01:48:46.648948 containerd[1454]: time="2026-03-06T01:48:46.648738420Z" level=info msg="RemovePodSandbox for \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" Mar 6 01:48:46.648948 containerd[1454]: time="2026-03-06T01:48:46.648808612Z" level=info msg="Forcibly stopping sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\"" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.700 [WARNING][5901] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0", GenerateName:"calico-kube-controllers-5f669bb65f-", Namespace:"calico-system", SelfLink:"", UID:"17d574a6-9dbc-4f4c-a51a-e2c93d76716b", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f669bb65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0d39fec100521bfeb0310f6a1de4d83d6d368e70b2306db125d99c13773df7", Pod:"calico-kube-controllers-5f669bb65f-fqbtc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33ee7680f9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.700 [INFO][5901] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.700 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" iface="eth0" netns="" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.700 [INFO][5901] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.700 [INFO][5901] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.725 [INFO][5911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.726 [INFO][5911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.726 [INFO][5911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.732 [WARNING][5911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.732 [INFO][5911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" HandleID="k8s-pod-network.155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Workload="localhost-k8s-calico--kube--controllers--5f669bb65f--fqbtc-eth0" Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.734 [INFO][5911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.740622 containerd[1454]: 2026-03-06 01:48:46.736 [INFO][5901] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c" Mar 6 01:48:46.740622 containerd[1454]: time="2026-03-06T01:48:46.740575762Z" level=info msg="TearDown network for sandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" successfully" Mar 6 01:48:46.746353 containerd[1454]: time="2026-03-06T01:48:46.746286625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:46.746452 containerd[1454]: time="2026-03-06T01:48:46.746394336Z" level=info msg="RemovePodSandbox \"155993370047aae9f56ad7f3a96e0dd1af61ef9ad07286b7e62b0effb4dba23c\" returns successfully" Mar 6 01:48:46.747099 containerd[1454]: time="2026-03-06T01:48:46.747001680Z" level=info msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.797 [WARNING][5930] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sf89w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8", Pod:"goldmane-9f7667bb8-sf89w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c103cae43", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.798 [INFO][5930] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.798 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" iface="eth0" netns="" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.798 [INFO][5930] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.798 [INFO][5930] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.831 [INFO][5938] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.831 [INFO][5938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.831 [INFO][5938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.837 [WARNING][5938] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.837 [INFO][5938] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.838 [INFO][5938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.844391 containerd[1454]: 2026-03-06 01:48:46.841 [INFO][5930] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.844391 containerd[1454]: time="2026-03-06T01:48:46.844357912Z" level=info msg="TearDown network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" successfully" Mar 6 01:48:46.845622 containerd[1454]: time="2026-03-06T01:48:46.844394841Z" level=info msg="StopPodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" returns successfully" Mar 6 01:48:46.845622 containerd[1454]: time="2026-03-06T01:48:46.845074197Z" level=info msg="RemovePodSandbox for \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" Mar 6 01:48:46.845622 containerd[1454]: time="2026-03-06T01:48:46.845109453Z" level=info msg="Forcibly stopping sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\"" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.894 [WARNING][5957] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--sf89w-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"0b8104ea-0f2b-4826-8b83-6a37cdde3bc1", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6eb59e7c06ed94ac64f2789c3c5b0953f717b9d8f3759731a9497a5c99ed2ef8", Pod:"goldmane-9f7667bb8-sf89w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c103cae43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.895 [INFO][5957] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.895 [INFO][5957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" iface="eth0" netns="" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.895 [INFO][5957] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.895 [INFO][5957] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.927 [INFO][5965] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.927 [INFO][5965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.927 [INFO][5965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.933 [WARNING][5965] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.934 [INFO][5965] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" HandleID="k8s-pod-network.e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Workload="localhost-k8s-goldmane--9f7667bb8--sf89w-eth0" Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.936 [INFO][5965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:46.941743 containerd[1454]: 2026-03-06 01:48:46.938 [INFO][5957] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a" Mar 6 01:48:46.942218 containerd[1454]: time="2026-03-06T01:48:46.941776500Z" level=info msg="TearDown network for sandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" successfully" Mar 6 01:48:46.952046 containerd[1454]: time="2026-03-06T01:48:46.951972117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:46.952184 containerd[1454]: time="2026-03-06T01:48:46.952069488Z" level=info msg="RemovePodSandbox \"e0c871700efe80e1b8dd7ebc4fe7cc8c3b739f641085dfd9bb30c42b3adefc2a\" returns successfully" Mar 6 01:48:46.952879 containerd[1454]: time="2026-03-06T01:48:46.952791878Z" level=info msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.003 [WARNING][5982] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9lrv6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2d6af042-9f63-4188-b4d2-c221e72cdd50", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4", Pod:"coredns-7d764666f9-9lrv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif31c1827652", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.003 [INFO][5982] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.003 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" iface="eth0" netns="" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.003 [INFO][5982] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.003 [INFO][5982] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.037 [INFO][5990] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.037 [INFO][5990] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.037 [INFO][5990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.045 [WARNING][5990] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.045 [INFO][5990] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.049 [INFO][5990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.057939 containerd[1454]: 2026-03-06 01:48:47.052 [INFO][5982] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.058906 containerd[1454]: time="2026-03-06T01:48:47.057996254Z" level=info msg="TearDown network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" successfully" Mar 6 01:48:47.058906 containerd[1454]: time="2026-03-06T01:48:47.058027612Z" level=info msg="StopPodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" returns successfully" Mar 6 01:48:47.058906 containerd[1454]: time="2026-03-06T01:48:47.058740879Z" level=info msg="RemovePodSandbox for \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" Mar 6 01:48:47.058906 containerd[1454]: time="2026-03-06T01:48:47.058767789Z" level=info msg="Forcibly stopping sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\"" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.104 [WARNING][6009] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--9lrv6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2d6af042-9f63-4188-b4d2-c221e72cdd50", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6e0221bef68c7b2b7719f9ee980e0b00abc93ec55ef7decd273a05e8b82b8b4", Pod:"coredns-7d764666f9-9lrv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif31c1827652", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.105 [INFO][6009] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.105 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" iface="eth0" netns="" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.105 [INFO][6009] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.105 [INFO][6009] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.134 [INFO][6018] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.134 [INFO][6018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.134 [INFO][6018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.140 [WARNING][6018] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.140 [INFO][6018] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" HandleID="k8s-pod-network.eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Workload="localhost-k8s-coredns--7d764666f9--9lrv6-eth0" Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.142 [INFO][6018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.149976 containerd[1454]: 2026-03-06 01:48:47.145 [INFO][6009] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b" Mar 6 01:48:47.149976 containerd[1454]: time="2026-03-06T01:48:47.149265531Z" level=info msg="TearDown network for sandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" successfully" Mar 6 01:48:47.158771 containerd[1454]: time="2026-03-06T01:48:47.158734321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:47.158942 containerd[1454]: time="2026-03-06T01:48:47.158812377Z" level=info msg="RemovePodSandbox \"eb69cb86362943c95308b66bd8a611e3e2a58f8f19df3f42a80a90d7361e163b\" returns successfully" Mar 6 01:48:47.159639 containerd[1454]: time="2026-03-06T01:48:47.159586001Z" level=info msg="StopPodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.203 [WARNING][6035] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"cc178117-4377-4398-8b69-5a7eb386dc85", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce", Pod:"calico-apiserver-849779dc44-jsqkt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali90bf64f5d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.203 [INFO][6035] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.203 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" iface="eth0" netns="" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.203 [INFO][6035] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.203 [INFO][6035] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.233 [INFO][6043] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.233 [INFO][6043] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.233 [INFO][6043] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.241 [WARNING][6043] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.241 [INFO][6043] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.243 [INFO][6043] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.248726 containerd[1454]: 2026-03-06 01:48:47.245 [INFO][6035] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.249415 containerd[1454]: time="2026-03-06T01:48:47.248741196Z" level=info msg="TearDown network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" successfully" Mar 6 01:48:47.249415 containerd[1454]: time="2026-03-06T01:48:47.248775079Z" level=info msg="StopPodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" returns successfully" Mar 6 01:48:47.249732 containerd[1454]: time="2026-03-06T01:48:47.249699145Z" level=info msg="RemovePodSandbox for \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" Mar 6 01:48:47.249865 containerd[1454]: time="2026-03-06T01:48:47.249742957Z" level=info msg="Forcibly stopping sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\"" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.297 [WARNING][6062] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0", GenerateName:"calico-apiserver-849779dc44-", Namespace:"calico-system", SelfLink:"", UID:"cc178117-4377-4398-8b69-5a7eb386dc85", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 48, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849779dc44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0088c73caa76613c34a49e9374943cc288c25b4732964b045b02dd73947db0ce", Pod:"calico-apiserver-849779dc44-jsqkt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali90bf64f5d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.297 [INFO][6062] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.297 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" iface="eth0" netns="" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.297 [INFO][6062] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.297 [INFO][6062] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.331 [INFO][6071] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.331 [INFO][6071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.331 [INFO][6071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.340 [WARNING][6071] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.340 [INFO][6071] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" HandleID="k8s-pod-network.afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Workload="localhost-k8s-calico--apiserver--849779dc44--jsqkt-eth0" Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.341 [INFO][6071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.347362 containerd[1454]: 2026-03-06 01:48:47.344 [INFO][6062] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2" Mar 6 01:48:47.347362 containerd[1454]: time="2026-03-06T01:48:47.347338764Z" level=info msg="TearDown network for sandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" successfully" Mar 6 01:48:47.352986 containerd[1454]: time="2026-03-06T01:48:47.352902262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 6 01:48:47.353046 containerd[1454]: time="2026-03-06T01:48:47.353010724Z" level=info msg="RemovePodSandbox \"afe4123d9f9cf02b0bce524c38ea53a48d43550766affd419d694a2c6fd35bd2\" returns successfully" Mar 6 01:48:47.353662 containerd[1454]: time="2026-03-06T01:48:47.353540093Z" level=info msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.396 [WARNING][6089] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--6wdml-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7582639d-989e-494f-9494-c73a5ce2a100", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126", Pod:"coredns-7d764666f9-6wdml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dfba2eaa4b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.396 [INFO][6089] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.396 [INFO][6089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" iface="eth0" netns="" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.396 [INFO][6089] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.396 [INFO][6089] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.427 [INFO][6097] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.427 [INFO][6097] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.427 [INFO][6097] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.434 [WARNING][6097] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.435 [INFO][6097] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.436 [INFO][6097] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.443890 containerd[1454]: 2026-03-06 01:48:47.441 [INFO][6089] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.443890 containerd[1454]: time="2026-03-06T01:48:47.443850088Z" level=info msg="TearDown network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" successfully" Mar 6 01:48:47.443890 containerd[1454]: time="2026-03-06T01:48:47.443883110Z" level=info msg="StopPodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" returns successfully" Mar 6 01:48:47.444611 containerd[1454]: time="2026-03-06T01:48:47.444521813Z" level=info msg="RemovePodSandbox for \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" Mar 6 01:48:47.444611 containerd[1454]: time="2026-03-06T01:48:47.444576225Z" level=info msg="Forcibly stopping sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\"" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.559 [WARNING][6114] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--6wdml-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7582639d-989e-494f-9494-c73a5ce2a100", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0966e4939ec176292cebf98b83f12b1b280f4163a882af54c6949b722a664126", Pod:"coredns-7d764666f9-6wdml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9dfba2eaa4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.561 [INFO][6114] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.561 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" iface="eth0" netns="" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.561 [INFO][6114] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.561 [INFO][6114] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.591 [INFO][6123] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.592 [INFO][6123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.592 [INFO][6123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.597 [WARNING][6123] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.597 [INFO][6123] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" HandleID="k8s-pod-network.027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Workload="localhost-k8s-coredns--7d764666f9--6wdml-eth0" Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.599 [INFO][6123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:48:47.604812 containerd[1454]: 2026-03-06 01:48:47.602 [INFO][6114] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9" Mar 6 01:48:47.604812 containerd[1454]: time="2026-03-06T01:48:47.604752688Z" level=info msg="TearDown network for sandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" successfully" Mar 6 01:48:47.609455 containerd[1454]: time="2026-03-06T01:48:47.609357199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:48:47.609455 containerd[1454]: time="2026-03-06T01:48:47.609431458Z" level=info msg="RemovePodSandbox \"027ab5f223c9f9033e2aef6e9da530c3e7ff41fa6fa6bad59777510cd807aab9\" returns successfully" Mar 6 01:48:49.472427 systemd[1]: Started sshd@16-10.0.0.156:22-10.0.0.1:59728.service - OpenSSH per-connection server daemon (10.0.0.1:59728). 
Mar 6 01:48:49.551952 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 59728 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:49.553879 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:49.560018 systemd-logind[1442]: New session 17 of user core. Mar 6 01:48:49.565340 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 6 01:48:49.801764 sshd[6131]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:49.806607 systemd[1]: sshd@16-10.0.0.156:22-10.0.0.1:59728.service: Deactivated successfully. Mar 6 01:48:49.808919 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 01:48:49.809800 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Mar 6 01:48:49.811414 systemd-logind[1442]: Removed session 17. Mar 6 01:48:52.329035 kubelet[2537]: I0306 01:48:52.328323 2537 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:48:54.820378 systemd[1]: Started sshd@17-10.0.0.156:22-10.0.0.1:46998.service - OpenSSH per-connection server daemon (10.0.0.1:46998). Mar 6 01:48:54.977932 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 46998 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:54.980306 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:54.986915 systemd-logind[1442]: New session 18 of user core. Mar 6 01:48:54.998409 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 01:48:55.211301 sshd[6178]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:55.220608 systemd[1]: sshd@17-10.0.0.156:22-10.0.0.1:46998.service: Deactivated successfully. Mar 6 01:48:55.224009 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 01:48:55.226928 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. 
Mar 6 01:48:55.232587 systemd[1]: Started sshd@18-10.0.0.156:22-10.0.0.1:47004.service - OpenSSH per-connection server daemon (10.0.0.1:47004). Mar 6 01:48:55.234085 systemd-logind[1442]: Removed session 18. Mar 6 01:48:55.265893 sshd[6192]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:55.267979 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:55.273966 systemd-logind[1442]: New session 19 of user core. Mar 6 01:48:55.289357 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 6 01:48:55.574384 sshd[6192]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:55.584908 systemd[1]: sshd@18-10.0.0.156:22-10.0.0.1:47004.service: Deactivated successfully. Mar 6 01:48:55.587462 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 01:48:55.589471 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Mar 6 01:48:55.595555 systemd[1]: Started sshd@19-10.0.0.156:22-10.0.0.1:47006.service - OpenSSH per-connection server daemon (10.0.0.1:47006). Mar 6 01:48:55.597219 systemd-logind[1442]: Removed session 19. Mar 6 01:48:55.641956 sshd[6205]: Accepted publickey for core from 10.0.0.1 port 47006 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:55.644363 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:55.651501 systemd-logind[1442]: New session 20 of user core. Mar 6 01:48:55.658348 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 6 01:48:56.246889 sshd[6205]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:56.258774 systemd[1]: sshd@19-10.0.0.156:22-10.0.0.1:47006.service: Deactivated successfully. Mar 6 01:48:56.262886 systemd[1]: session-20.scope: Deactivated successfully. Mar 6 01:48:56.264045 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. 
Mar 6 01:48:56.278732 systemd[1]: Started sshd@20-10.0.0.156:22-10.0.0.1:47012.service - OpenSSH per-connection server daemon (10.0.0.1:47012). Mar 6 01:48:56.281110 systemd-logind[1442]: Removed session 20. Mar 6 01:48:56.314283 sshd[6232]: Accepted publickey for core from 10.0.0.1 port 47012 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:56.316239 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:56.321981 systemd-logind[1442]: New session 21 of user core. Mar 6 01:48:56.330345 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 6 01:48:56.720195 sshd[6232]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:56.729498 systemd[1]: sshd@20-10.0.0.156:22-10.0.0.1:47012.service: Deactivated successfully. Mar 6 01:48:56.732566 systemd[1]: session-21.scope: Deactivated successfully. Mar 6 01:48:56.736022 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Mar 6 01:48:56.747632 systemd[1]: Started sshd@21-10.0.0.156:22-10.0.0.1:47018.service - OpenSSH per-connection server daemon (10.0.0.1:47018). Mar 6 01:48:56.750211 systemd-logind[1442]: Removed session 21. Mar 6 01:48:56.818532 sshd[6246]: Accepted publickey for core from 10.0.0.1 port 47018 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:48:56.821077 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:48:56.828589 systemd-logind[1442]: New session 22 of user core. Mar 6 01:48:56.838522 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 6 01:48:57.008498 sshd[6246]: pam_unix(sshd:session): session closed for user core Mar 6 01:48:57.014386 systemd[1]: sshd@21-10.0.0.156:22-10.0.0.1:47018.service: Deactivated successfully. Mar 6 01:48:57.017558 systemd[1]: session-22.scope: Deactivated successfully. Mar 6 01:48:57.018666 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. 
Mar 6 01:48:57.020510 systemd-logind[1442]: Removed session 22. Mar 6 01:49:01.046392 kubelet[2537]: E0306 01:49:01.039694 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:02.036608 systemd[1]: Started sshd@22-10.0.0.156:22-10.0.0.1:47030.service - OpenSSH per-connection server daemon (10.0.0.1:47030). Mar 6 01:49:02.076746 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 47030 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:49:02.080256 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:02.087666 systemd-logind[1442]: New session 23 of user core. Mar 6 01:49:02.097371 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 6 01:49:02.255710 sshd[6272]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:02.262461 systemd[1]: sshd@22-10.0.0.156:22-10.0.0.1:47030.service: Deactivated successfully. Mar 6 01:49:02.266725 systemd[1]: session-23.scope: Deactivated successfully. Mar 6 01:49:02.267857 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Mar 6 01:49:02.269775 systemd-logind[1442]: Removed session 23. Mar 6 01:49:07.280532 systemd[1]: Started sshd@23-10.0.0.156:22-10.0.0.1:33592.service - OpenSSH per-connection server daemon (10.0.0.1:33592). Mar 6 01:49:07.331033 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:49:07.333431 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:07.338709 systemd-logind[1442]: New session 24 of user core. Mar 6 01:49:07.348291 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 6 01:49:07.518355 sshd[6332]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:07.522473 systemd[1]: sshd@23-10.0.0.156:22-10.0.0.1:33592.service: Deactivated successfully. Mar 6 01:49:07.524558 systemd[1]: session-24.scope: Deactivated successfully. Mar 6 01:49:07.525403 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Mar 6 01:49:07.526630 systemd-logind[1442]: Removed session 24. Mar 6 01:49:11.036714 kubelet[2537]: E0306 01:49:11.036618 2537 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:49:12.530020 systemd[1]: Started sshd@24-10.0.0.156:22-10.0.0.1:54388.service - OpenSSH per-connection server daemon (10.0.0.1:54388). Mar 6 01:49:12.564899 sshd[6346]: Accepted publickey for core from 10.0.0.1 port 54388 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:49:12.565684 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:49:12.570837 systemd-logind[1442]: New session 25 of user core. Mar 6 01:49:12.576315 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 6 01:49:12.698214 sshd[6346]: pam_unix(sshd:session): session closed for user core Mar 6 01:49:12.702868 systemd[1]: sshd@24-10.0.0.156:22-10.0.0.1:54388.service: Deactivated successfully. Mar 6 01:49:12.705186 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 01:49:12.706174 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Mar 6 01:49:12.707644 systemd-logind[1442]: Removed session 25.