Apr 21 10:20:27.853325 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026 Apr 21 10:20:27.853345 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:20:27.853354 kernel: BIOS-provided physical RAM map: Apr 21 10:20:27.853382 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 21 10:20:27.853388 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 21 10:20:27.853393 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 21 10:20:27.853399 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 21 10:20:27.853404 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 21 10:20:27.853408 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 21 10:20:27.853414 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 21 10:20:27.853419 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 21 10:20:27.853423 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 21 10:20:27.853427 kernel: NX (Execute Disable) protection: active Apr 21 10:20:27.853432 kernel: APIC: Static calls initialized Apr 21 10:20:27.853437 kernel: SMBIOS 2.8 present. 
Apr 21 10:20:27.853443 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 21 10:20:27.853448 kernel: Hypervisor detected: KVM Apr 21 10:20:27.853452 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 21 10:20:27.853457 kernel: kvm-clock: using sched offset of 5032230386 cycles Apr 21 10:20:27.853462 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 21 10:20:27.853467 kernel: tsc: Detected 2793.438 MHz processor Apr 21 10:20:27.853472 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 21 10:20:27.853478 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 21 10:20:27.853486 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 21 10:20:27.853496 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 21 10:20:27.853504 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 21 10:20:27.853511 kernel: Using GB pages for direct mapping Apr 21 10:20:27.853517 kernel: ACPI: Early table checksum verification disabled Apr 21 10:20:27.853525 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 21 10:20:27.853533 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853540 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853547 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853555 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 21 10:20:27.853565 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853572 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853581 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853590 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 21 10:20:27.853599 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 21 10:20:27.853608 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 21 10:20:27.853614 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 21 10:20:27.853621 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 21 10:20:27.853627 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 21 10:20:27.853633 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 21 10:20:27.853638 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 21 10:20:27.853643 kernel: No NUMA configuration found Apr 21 10:20:27.853648 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 21 10:20:27.853653 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 21 10:20:27.853659 kernel: Zone ranges: Apr 21 10:20:27.853664 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 21 10:20:27.853669 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 21 10:20:27.853674 kernel: Normal empty Apr 21 10:20:27.853679 kernel: Movable zone start for each node Apr 21 10:20:27.853684 kernel: Early memory node ranges Apr 21 10:20:27.853689 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 21 10:20:27.853694 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 21 10:20:27.853698 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 21 10:20:27.853703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 21 10:20:27.853710 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 21 10:20:27.853715 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 21 10:20:27.853720 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 21 10:20:27.853726 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 21 10:20:27.853730 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 21 10:20:27.853735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 21 10:20:27.853740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 21 10:20:27.853745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 21 10:20:27.853750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 21 10:20:27.853756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 21 10:20:27.853761 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 21 10:20:27.853766 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 21 10:20:27.853801 kernel: TSC deadline timer available Apr 21 10:20:27.853806 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 21 10:20:27.853811 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 21 10:20:27.853816 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 21 10:20:27.853821 kernel: kvm-guest: setup PV sched yield Apr 21 10:20:27.853826 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 21 10:20:27.853833 kernel: Booting paravirtualized kernel on KVM Apr 21 10:20:27.853838 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 21 10:20:27.853843 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 21 10:20:27.853848 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 21 10:20:27.853853 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 21 10:20:27.853858 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 21 10:20:27.853862 kernel: kvm-guest: PV spinlocks enabled Apr 21 10:20:27.853867 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 21 10:20:27.853873 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:20:27.853880 kernel: random: crng init done Apr 21 10:20:27.853885 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 21 10:20:27.853890 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 21 10:20:27.853895 kernel: Fallback order for Node 0: 0 Apr 21 10:20:27.853900 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 21 10:20:27.853905 kernel: Policy zone: DMA32 Apr 21 10:20:27.853910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 21 10:20:27.853915 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved) Apr 21 10:20:27.853921 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 21 10:20:27.853926 kernel: ftrace: allocating 37996 entries in 149 pages Apr 21 10:20:27.853932 kernel: ftrace: allocated 149 pages with 4 groups Apr 21 10:20:27.853937 kernel: Dynamic Preempt: voluntary Apr 21 10:20:27.853942 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 21 10:20:27.853950 kernel: rcu: RCU event tracing is enabled. Apr 21 10:20:27.853955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 21 10:20:27.853960 kernel: Trampoline variant of Tasks RCU enabled. Apr 21 10:20:27.853965 kernel: Rude variant of Tasks RCU enabled. Apr 21 10:20:27.853972 kernel: Tracing variant of Tasks RCU enabled. Apr 21 10:20:27.853977 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 21 10:20:27.853982 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 21 10:20:27.853987 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 21 10:20:27.853992 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 21 10:20:27.853997 kernel: Console: colour VGA+ 80x25 Apr 21 10:20:27.854002 kernel: printk: console [ttyS0] enabled Apr 21 10:20:27.854007 kernel: ACPI: Core revision 20230628 Apr 21 10:20:27.854012 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 21 10:20:27.854017 kernel: APIC: Switch to symmetric I/O mode setup Apr 21 10:20:27.854023 kernel: x2apic enabled Apr 21 10:20:27.854028 kernel: APIC: Switched APIC routing to: physical x2apic Apr 21 10:20:27.854033 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 21 10:20:27.854038 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 21 10:20:27.854044 kernel: kvm-guest: setup PV IPIs Apr 21 10:20:27.854048 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 21 10:20:27.854054 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 21 10:20:27.854065 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 21 10:20:27.854071 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 21 10:20:27.854076 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 21 10:20:27.854082 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 21 10:20:27.854089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 21 10:20:27.854094 kernel: Spectre V2 : Mitigation: Retpolines Apr 21 10:20:27.854099 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 21 10:20:27.854105 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 21 10:20:27.854111 kernel: RETBleed: Vulnerable Apr 21 10:20:27.854117 kernel: Speculative Store Bypass: Vulnerable Apr 21 10:20:27.854123 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 21 10:20:27.854128 kernel: GDS: Unknown: Dependent on hypervisor status Apr 21 10:20:27.854134 kernel: active return thunk: its_return_thunk Apr 21 10:20:27.854139 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 21 10:20:27.854145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 21 10:20:27.854150 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 21 10:20:27.854156 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 21 10:20:27.854161 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 21 10:20:27.854168 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 21 10:20:27.854173 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 21 10:20:27.854179 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 21 10:20:27.854184 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 21 10:20:27.854189 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 21 10:20:27.854195 kernel: x86/fpu: 
xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 21 10:20:27.854200 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 21 10:20:27.854206 kernel: Freeing SMP alternatives memory: 32K Apr 21 10:20:27.854211 kernel: pid_max: default: 32768 minimum: 301 Apr 21 10:20:27.854218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 21 10:20:27.854223 kernel: landlock: Up and running. Apr 21 10:20:27.854229 kernel: SELinux: Initializing. Apr 21 10:20:27.854234 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:20:27.854240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:20:27.854245 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 21 10:20:27.854251 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:20:27.854256 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:20:27.854263 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:20:27.854268 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 21 10:20:27.854274 kernel: signal: max sigframe size: 3632 Apr 21 10:20:27.854279 kernel: rcu: Hierarchical SRCU implementation. Apr 21 10:20:27.854285 kernel: rcu: Max phase no-delay instances is 400. Apr 21 10:20:27.854290 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 21 10:20:27.854296 kernel: smp: Bringing up secondary CPUs ... Apr 21 10:20:27.854301 kernel: smpboot: x86: Booting SMP configuration: Apr 21 10:20:27.854307 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 21 10:20:27.854313 kernel: smp: Brought up 1 node, 4 CPUs Apr 21 10:20:27.854319 kernel: smpboot: Max logical packages: 1 Apr 21 10:20:27.854324 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 21 10:20:27.854330 kernel: devtmpfs: initialized Apr 21 10:20:27.854335 kernel: x86/mm: Memory block size: 128MB Apr 21 10:20:27.854340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 21 10:20:27.854346 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 21 10:20:27.854351 kernel: pinctrl core: initialized pinctrl subsystem Apr 21 10:20:27.854357 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 21 10:20:27.854410 kernel: audit: initializing netlink subsys (disabled) Apr 21 10:20:27.854416 kernel: audit: type=2000 audit(1776766826.734:1): state=initialized audit_enabled=0 res=1 Apr 21 10:20:27.854422 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 21 10:20:27.854427 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 21 10:20:27.854433 kernel: cpuidle: using governor menu Apr 21 10:20:27.854438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 21 10:20:27.854443 kernel: dca service started, version 1.12.1 Apr 21 10:20:27.854449 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 21 10:20:27.854455 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 21 10:20:27.854462 kernel: PCI: Using configuration type 1 for base access Apr 21 10:20:27.854467 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 21 10:20:27.854473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 21 10:20:27.854478 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 21 10:20:27.854484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 21 10:20:27.854489 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 21 10:20:27.854495 kernel: ACPI: Added _OSI(Module Device) Apr 21 10:20:27.854501 kernel: ACPI: Added _OSI(Processor Device) Apr 21 10:20:27.854511 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 21 10:20:27.854522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 21 10:20:27.854530 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 21 10:20:27.854537 kernel: ACPI: Interpreter enabled Apr 21 10:20:27.854547 kernel: ACPI: PM: (supports S0 S3 S5) Apr 21 10:20:27.854557 kernel: ACPI: Using IOAPIC for interrupt routing Apr 21 10:20:27.854567 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 21 10:20:27.854581 kernel: PCI: Using E820 reservations for host bridge windows Apr 21 10:20:27.854587 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 21 10:20:27.854593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 21 10:20:27.854721 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 21 10:20:27.854814 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 21 10:20:27.854951 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 21 10:20:27.854969 kernel: PCI host bridge to bus 0000:00 Apr 21 10:20:27.855032 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 21 10:20:27.855085 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 21 10:20:27.855135 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 21 10:20:27.855189 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Apr 21 10:20:27.855238 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 21 10:20:27.855287 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 21 10:20:27.855336 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 21 10:20:27.855464 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 21 10:20:27.855539 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 21 10:20:27.855598 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 21 10:20:27.855652 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 21 10:20:27.855706 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 21 10:20:27.855761 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 21 10:20:27.855877 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 21 10:20:27.855936 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 21 10:20:27.855992 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 21 10:20:27.856049 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 21 10:20:27.856110 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 21 10:20:27.856165 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 21 10:20:27.856221 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 21 10:20:27.856276 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Apr 21 10:20:27.856357 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 21 10:20:27.856439 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 21 10:20:27.856497 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 21 10:20:27.856552 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 21 10:20:27.856606 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Apr 21 10:20:27.856679 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 21 10:20:27.856801 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 21 10:20:27.856921 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 21 10:20:27.856982 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 21 10:20:27.857037 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 21 10:20:27.857110 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 21 10:20:27.857178 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 21 10:20:27.857186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 21 10:20:27.857192 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 21 10:20:27.857197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 21 10:20:27.857203 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 21 10:20:27.857211 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 21 10:20:27.857216 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 21 10:20:27.857222 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 21 10:20:27.857227 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 21 10:20:27.857233 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 21 10:20:27.857238 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 21 10:20:27.857243 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 21 10:20:27.857249 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 21 10:20:27.857254 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 21 10:20:27.857261 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 21 10:20:27.857267 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 21 10:20:27.857272 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 21 10:20:27.857278 
kernel: iommu: Default domain type: Translated Apr 21 10:20:27.857283 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 21 10:20:27.857289 kernel: PCI: Using ACPI for IRQ routing Apr 21 10:20:27.857294 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 21 10:20:27.857300 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 21 10:20:27.857306 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 21 10:20:27.857482 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 21 10:20:27.857548 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 21 10:20:27.857603 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 21 10:20:27.857610 kernel: vgaarb: loaded Apr 21 10:20:27.857616 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 21 10:20:27.857622 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 21 10:20:27.857627 kernel: clocksource: Switched to clocksource kvm-clock Apr 21 10:20:27.857633 kernel: VFS: Disk quotas dquot_6.6.0 Apr 21 10:20:27.857638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 21 10:20:27.857646 kernel: pnp: PnP ACPI init Apr 21 10:20:27.857709 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 21 10:20:27.857717 kernel: pnp: PnP ACPI: found 6 devices Apr 21 10:20:27.857723 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 21 10:20:27.857729 kernel: NET: Registered PF_INET protocol family Apr 21 10:20:27.857734 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 21 10:20:27.857740 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 21 10:20:27.857745 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 21 10:20:27.857753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 21 10:20:27.857758 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 10:20:27.857764 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 10:20:27.857769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:20:27.857791 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:20:27.857797 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 10:20:27.857803 kernel: NET: Registered PF_XDP protocol family Apr 21 10:20:27.857857 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 21 10:20:27.857910 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:20:27.857959 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:20:27.858007 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 21 10:20:27.858055 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 21 10:20:27.858104 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 21 10:20:27.858111 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:20:27.858116 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 21 10:20:27.858122 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 21 10:20:27.858127 kernel: Initialise system trusted keyrings Apr 21 10:20:27.858135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:20:27.858140 kernel: Key type asymmetric registered Apr 21 10:20:27.858146 kernel: Asymmetric key parser 'x509' registered Apr 21 10:20:27.858151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:20:27.858157 kernel: io scheduler mq-deadline registered Apr 21 10:20:27.858163 kernel: io scheduler kyber registered Apr 21 10:20:27.858168 kernel: io scheduler bfq registered Apr 21 10:20:27.858174 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Apr 21 10:20:27.858180 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:20:27.858188 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:20:27.858193 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 21 10:20:27.858199 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:20:27.858204 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:20:27.858210 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:20:27.858215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:20:27.858221 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:20:27.858309 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 21 10:20:27.858319 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:20:27.858450 kernel: rtc_cmos 00:04: registered as rtc0 Apr 21 10:20:27.858632 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:20:27 UTC (1776766827) Apr 21 10:20:27.858729 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:20:27.858738 kernel: intel_pstate: CPU model not supported Apr 21 10:20:27.858744 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:20:27.858750 kernel: Segment Routing with IPv6 Apr 21 10:20:27.858755 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:20:27.858761 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:20:27.858769 kernel: Key type dns_resolver registered Apr 21 10:20:27.858790 kernel: IPI shorthand broadcast: enabled Apr 21 10:20:27.858806 kernel: sched_clock: Marking stable (895008375, 375487598)->(1396524408, -126028435) Apr 21 10:20:27.858812 kernel: registered taskstats version 1 Apr 21 10:20:27.858817 kernel: Loading compiled-in X.509 certificates Apr 21 10:20:27.858823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:20:27.858837 kernel: Key type .fscrypt 
registered Apr 21 10:20:27.858843 kernel: Key type fscrypt-provisioning registered Apr 21 10:20:27.858848 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 10:20:27.858856 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:20:27.858862 kernel: ima: No architecture policies found Apr 21 10:20:27.858867 kernel: clk: Disabling unused clocks Apr 21 10:20:27.858873 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:20:27.858878 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:20:27.858884 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:20:27.858889 kernel: Run /init as init process Apr 21 10:20:27.858895 kernel: with arguments: Apr 21 10:20:27.858900 kernel: /init Apr 21 10:20:27.858907 kernel: with environment: Apr 21 10:20:27.858913 kernel: HOME=/ Apr 21 10:20:27.858927 kernel: TERM=linux Apr 21 10:20:27.858935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:20:27.858943 systemd[1]: Detected virtualization kvm. Apr 21 10:20:27.858949 systemd[1]: Detected architecture x86-64. Apr 21 10:20:27.858954 systemd[1]: Running in initrd. Apr 21 10:20:27.858960 systemd[1]: No hostname configured, using default hostname. Apr 21 10:20:27.858968 systemd[1]: Hostname set to . Apr 21 10:20:27.858974 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:20:27.858980 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:20:27.858985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:20:27.858991 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 21 10:20:27.858998 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 10:20:27.859004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:20:27.859010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:20:27.859018 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:20:27.859035 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:20:27.859041 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:20:27.859047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:20:27.859055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:20:27.859061 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:20:27.859067 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:20:27.859073 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:20:27.859080 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:20:27.859086 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:20:27.859092 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:20:27.859098 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:20:27.859104 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:20:27.859112 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:20:27.859118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:20:27.859124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 21 10:20:27.859130 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:20:27.859136 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:20:27.859142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:20:27.859148 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:20:27.859155 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:20:27.859162 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:20:27.859168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:20:27.859174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:20:27.859180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:20:27.859186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:20:27.859192 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:20:27.859214 systemd-journald[194]: Collecting audit messages is disabled.
Apr 21 10:20:27.859231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:20:27.859247 systemd-journald[194]: Journal started
Apr 21 10:20:27.859262 systemd-journald[194]: Runtime Journal (/run/log/journal/610ba74f310549708b7300dd4c81d30e) is 6.0M, max 48.4M, 42.3M free.
Apr 21 10:20:27.858856 systemd-modules-load[195]: Inserted module 'overlay'
Apr 21 10:20:27.862242 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:20:27.865577 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:20:27.868451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:20:27.982287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:20:27.983012 kernel: Bridge firewalling registered
Apr 21 10:20:27.888012 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 21 10:20:27.986088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:20:27.993016 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:20:28.003693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:20:28.029726 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:20:28.033832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:20:28.038423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:20:28.150987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:20:28.157198 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:20:28.162634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:20:28.196056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:20:28.201571 dracut-cmdline[229]: dracut-dracut-053
Apr 21 10:20:28.209167 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:20:28.207584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:20:28.233847 systemd-resolved[241]: Positive Trust Anchors:
Apr 21 10:20:28.233868 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:20:28.233893 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:20:28.235960 systemd-resolved[241]: Defaulting to hostname 'linux'.
Apr 21 10:20:28.237060 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:20:28.238804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:20:28.296476 kernel: SCSI subsystem initialized
Apr 21 10:20:28.304388 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:20:28.314455 kernel: iscsi: registered transport (tcp)
Apr 21 10:20:28.332973 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:20:28.333100 kernel: QLogic iSCSI HBA Driver
Apr 21 10:20:28.365183 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:20:28.383738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:20:28.405846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:20:28.405959 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:20:28.405979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:20:28.446551 kernel: raid6: avx512x4 gen() 45685 MB/s
Apr 21 10:20:28.463510 kernel: raid6: avx512x2 gen() 44195 MB/s
Apr 21 10:20:28.480489 kernel: raid6: avx512x1 gen() 44236 MB/s
Apr 21 10:20:28.497490 kernel: raid6: avx2x4 gen() 36782 MB/s
Apr 21 10:20:28.514478 kernel: raid6: avx2x2 gen() 36353 MB/s
Apr 21 10:20:28.531866 kernel: raid6: avx2x1 gen() 27670 MB/s
Apr 21 10:20:28.531911 kernel: raid6: using algorithm avx512x4 gen() 45685 MB/s
Apr 21 10:20:28.549838 kernel: raid6: .... xor() 10401 MB/s, rmw enabled
Apr 21 10:20:28.549856 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:20:28.569473 kernel: xor: automatically using best checksumming function avx
Apr 21 10:20:28.704466 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:20:28.715138 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:20:28.729034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:20:28.738437 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Apr 21 10:20:28.741214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:20:28.744512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:20:28.757096 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Apr 21 10:20:28.781152 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:20:28.795715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:20:28.827712 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:20:28.836614 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:20:28.845039 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:20:28.848612 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:20:28.850287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:20:28.859653 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 21 10:20:28.862424 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:20:28.852445 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:20:28.867154 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 21 10:20:28.867545 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:20:28.875978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:20:28.875995 kernel: GPT:9289727 != 19775487
Apr 21 10:20:28.876003 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:20:28.876010 kernel: GPT:9289727 != 19775487
Apr 21 10:20:28.876017 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:20:28.876024 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:20:28.875000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:20:28.882189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:20:28.882482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:20:28.890981 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:20:28.895477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:20:28.895564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:20:28.897149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:20:28.917127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:20:28.925412 kernel: libata version 3.00 loaded.
Apr 21 10:20:28.927594 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:20:28.930414 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:20:28.934429 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 10:20:28.937455 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 10:20:28.940460 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 21 10:20:28.940600 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 10:20:28.948429 kernel: scsi host0: ahci
Apr 21 10:20:28.948559 kernel: scsi host1: ahci
Apr 21 10:20:28.949504 kernel: scsi host2: ahci
Apr 21 10:20:28.950435 kernel: scsi host3: ahci
Apr 21 10:20:28.952543 kernel: scsi host4: ahci
Apr 21 10:20:28.955446 kernel: scsi host5: ahci
Apr 21 10:20:28.955527 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 21 10:20:28.955535 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 21 10:20:28.955543 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 21 10:20:28.955549 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 21 10:20:28.955556 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 21 10:20:28.955568 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 21 10:20:28.960195 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Apr 21 10:20:28.972986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:20:29.057251 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (475)
Apr 21 10:20:29.061502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:20:29.065285 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 21 10:20:29.068899 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 21 10:20:29.075545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 21 10:20:29.075640 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 21 10:20:29.096966 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:20:29.102054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:20:29.106257 disk-uuid[555]: Primary Header is updated.
Apr 21 10:20:29.106257 disk-uuid[555]: Secondary Entries is updated.
Apr 21 10:20:29.106257 disk-uuid[555]: Secondary Header is updated.
Apr 21 10:20:29.112401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:20:29.115825 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:20:29.135110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:20:29.265483 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 10:20:29.274488 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 10:20:29.274625 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 10:20:29.275394 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 21 10:20:29.277428 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 10:20:29.278405 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 10:20:29.279411 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 21 10:20:29.281537 kernel: ata3.00: applying bridge limits
Apr 21 10:20:29.282522 kernel: ata3.00: configured for UDMA/100
Apr 21 10:20:29.283421 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 21 10:20:29.328196 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 21 10:20:29.328533 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 10:20:29.343421 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 21 10:20:30.118873 disk-uuid[556]: The operation has completed successfully.
Apr 21 10:20:30.120897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 10:20:30.135688 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:20:30.135782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:20:30.165854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:20:30.171903 sh[591]: Success
Apr 21 10:20:30.186383 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 21 10:20:30.214318 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:20:30.232754 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:20:30.234681 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:20:30.247941 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:20:30.248086 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:20:30.248095 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:20:30.249201 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:20:30.251126 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:20:30.255765 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:20:30.256334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:20:30.266852 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:20:30.267909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:20:30.285118 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:20:30.285315 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:20:30.285326 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:20:30.290415 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:20:30.297894 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:20:30.301460 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:20:30.307592 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:20:30.312955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:20:30.358744 ignition[693]: Ignition 2.19.0
Apr 21 10:20:30.358761 ignition[693]: Stage: fetch-offline
Apr 21 10:20:30.358786 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:30.358809 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:30.358890 ignition[693]: parsed url from cmdline: ""
Apr 21 10:20:30.358892 ignition[693]: no config URL provided
Apr 21 10:20:30.358896 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:20:30.358901 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:20:30.358921 ignition[693]: op(1): [started] loading QEMU firmware config module
Apr 21 10:20:30.358924 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 21 10:20:30.367883 ignition[693]: op(1): [finished] loading QEMU firmware config module
Apr 21 10:20:30.374351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:20:30.386563 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:20:30.402357 systemd-networkd[779]: lo: Link UP
Apr 21 10:20:30.402413 systemd-networkd[779]: lo: Gained carrier
Apr 21 10:20:30.403262 systemd-networkd[779]: Enumeration completed
Apr 21 10:20:30.403528 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:20:30.403692 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:20:30.403695 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:20:30.404563 systemd-networkd[779]: eth0: Link UP
Apr 21 10:20:30.404565 systemd-networkd[779]: eth0: Gained carrier
Apr 21 10:20:30.404571 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:20:30.405639 systemd[1]: Reached target network.target - Network.
Apr 21 10:20:30.425698 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:20:30.485691 ignition[693]: parsing config with SHA512: 3be23d09fb2c801893d884cbe9ea6e12758ab45250786ce89ab63803fedf8b3efd43e9b83faeb637567070769ad49df4814aac1567b17f84592f7a6926446181
Apr 21 10:20:30.489508 unknown[693]: fetched base config from "system"
Apr 21 10:20:30.489521 unknown[693]: fetched user config from "qemu"
Apr 21 10:20:30.490186 ignition[693]: fetch-offline: fetch-offline passed
Apr 21 10:20:30.490288 ignition[693]: Ignition finished successfully
Apr 21 10:20:30.493252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:20:30.494480 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 21 10:20:30.504572 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:20:30.544646 ignition[783]: Ignition 2.19.0
Apr 21 10:20:30.544663 ignition[783]: Stage: kargs
Apr 21 10:20:30.544848 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:30.544856 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:30.545584 ignition[783]: kargs: kargs passed
Apr 21 10:20:30.545619 ignition[783]: Ignition finished successfully
Apr 21 10:20:30.549507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:20:30.559500 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:20:30.575840 ignition[791]: Ignition 2.19.0
Apr 21 10:20:30.575855 ignition[791]: Stage: disks
Apr 21 10:20:30.575990 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:30.575998 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:30.576609 ignition[791]: disks: disks passed
Apr 21 10:20:30.576640 ignition[791]: Ignition finished successfully
Apr 21 10:20:30.583294 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:20:30.587189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:20:30.591551 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:20:30.591733 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:20:30.595475 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:20:30.595904 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:20:30.612552 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:20:30.623247 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:20:30.627387 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:20:30.632149 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:20:30.799398 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:20:30.799433 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:20:30.799941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:20:30.817531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:20:30.819080 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:20:30.826561 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Apr 21 10:20:30.826593 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:20:30.826603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:20:30.822310 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:20:30.834962 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:20:30.834980 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:20:30.822344 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:20:30.822390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:20:30.833606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:20:30.836217 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:20:30.845500 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:20:30.895170 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:20:30.898564 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:20:30.902227 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:20:30.905359 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:20:30.974202 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:20:30.983502 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:20:30.987397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:20:30.991469 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:20:31.024688 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:20:31.028123 ignition[924]: INFO : Ignition 2.19.0
Apr 21 10:20:31.028123 ignition[924]: INFO : Stage: mount
Apr 21 10:20:31.031656 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:31.031656 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:31.031656 ignition[924]: INFO : mount: mount passed
Apr 21 10:20:31.031656 ignition[924]: INFO : Ignition finished successfully
Apr 21 10:20:31.029605 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:20:31.043885 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:20:31.246100 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:20:31.259987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:20:31.269375 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Apr 21 10:20:31.269430 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:20:31.269440 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:20:31.271498 kernel: BTRFS info (device vda6): using free space tree
Apr 21 10:20:31.274431 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 21 10:20:31.275543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:20:31.296814 ignition[955]: INFO : Ignition 2.19.0
Apr 21 10:20:31.296814 ignition[955]: INFO : Stage: files
Apr 21 10:20:31.301089 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:31.301089 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:31.301089 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:20:31.301089 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:20:31.301089 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:20:31.311220 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:20:31.311220 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:20:31.311220 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:20:31.311220 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:20:31.311220 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:20:31.303687 unknown[955]: wrote ssh authorized keys file for user: core
Apr 21 10:20:31.340851 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:20:31.418261 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:20:31.418261 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:20:31.424469 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 10:20:31.796559 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:20:32.199694 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 21 10:20:33.273459 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 10:20:33.273459 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:20:33.279032 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:20:33.300088 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:20:33.308474 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:20:33.310562 ignition[955]: INFO : files: files passed
Apr 21 10:20:33.310562 ignition[955]: INFO : Ignition finished successfully
Apr 21 10:20:33.315292 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:20:33.335828 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:20:33.339502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:20:33.342905 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:20:33.344188 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:20:33.347224 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 10:20:33.349523 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:20:33.349523 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:20:33.355135 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:20:33.350924 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:20:33.352242 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:20:33.367480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:20:33.391853 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:20:33.391955 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:20:33.393472 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:20:33.398975 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:20:33.399079 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:20:33.399705 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:20:33.418227 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:20:33.426531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:20:33.437026 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:20:33.437163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:20:33.440444 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:20:33.443302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:20:33.443406 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:20:33.450062 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:20:33.450182 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:20:33.452708 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:20:33.454919 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:20:33.459014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:20:33.460663 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:20:33.463214 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:20:33.465738 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:20:33.468833 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:20:33.471308 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:20:33.473741 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:20:33.473853 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:20:33.478351 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:20:33.479690 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:20:33.482332 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:20:33.486427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:20:33.486559 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:20:33.486652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:20:33.493639 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:20:33.493792 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:20:33.497647 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:20:33.498835 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:20:33.504478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:20:33.504665 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:20:33.508066 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:20:33.510247 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:20:33.510325 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:20:33.513659 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:20:33.513734 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:20:33.518314 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:20:33.518456 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:20:33.519663 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:20:33.519746 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:20:33.531551 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:20:33.534517 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:20:33.534596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:20:33.534677 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:20:33.541905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:20:33.543282 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:20:33.548972 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:20:33.549051 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:20:33.555156 ignition[1009]: INFO : Ignition 2.19.0
Apr 21 10:20:33.555156 ignition[1009]: INFO : Stage: umount
Apr 21 10:20:33.555156 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:20:33.555156 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 10:20:33.555156 ignition[1009]: INFO : umount: umount passed
Apr 21 10:20:33.555156 ignition[1009]: INFO : Ignition finished successfully
Apr 21 10:20:33.556554 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:20:33.556992 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:20:33.557073 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:20:33.562163 systemd[1]: Stopped target network.target - Network.
Apr 21 10:20:33.565420 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:20:33.565479 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:20:33.566963 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:20:33.566995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:20:33.569949 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:20:33.569979 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:20:33.572533 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:20:33.572565 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:20:33.575346 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:20:33.577689 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:20:33.583439 systemd-networkd[779]: eth0: DHCPv6 lease lost
Apr 21 10:20:33.588505 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:20:33.588604 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:20:33.593867 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:20:33.593904 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:20:33.602520 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:20:33.603720 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:20:33.604008 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:20:33.605442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:20:33.612580 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:20:33.612665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:20:33.616112 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:20:33.616575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:20:33.629614 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:20:33.629740 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:20:33.633075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:20:33.633122 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:20:33.637709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:20:33.637740 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:20:33.638698 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:20:33.638743 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:20:33.644434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:20:33.644470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:20:33.648944 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:20:33.649011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:20:33.653071 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:20:33.653105 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:20:33.666521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:20:33.666606 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:20:33.666650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:20:33.671238 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:20:33.671273 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:20:33.674193 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:20:33.674228 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:20:33.674705 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:20:33.674735 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:20:33.680656 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:20:33.680694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:20:33.683185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:20:33.683217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:20:33.684586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:20:33.684616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:20:33.687990 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:20:33.688080 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:20:33.690682 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:20:33.690747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:20:33.695718 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:20:33.709743 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:20:33.716549 systemd[1]: Switching root.
Apr 21 10:20:33.743807 systemd-journald[194]: Journal stopped
Apr 21 10:20:34.612149 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:20:34.612198 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:20:34.612212 kernel: SELinux: policy capability open_perms=1
Apr 21 10:20:34.612220 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:20:34.612227 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:20:34.612234 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:20:34.612243 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:20:34.612251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:20:34.612263 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:20:34.612270 kernel: audit: type=1403 audit(1776766833.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:20:34.612279 systemd[1]: Successfully loaded SELinux policy in 41.343ms.
Apr 21 10:20:34.612293 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.861ms.
Apr 21 10:20:34.612301 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:20:34.612309 systemd[1]: Detected virtualization kvm.
Apr 21 10:20:34.612317 systemd[1]: Detected architecture x86-64.
Apr 21 10:20:34.612326 systemd[1]: Detected first boot.
Apr 21 10:20:34.612334 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:20:34.612348 zram_generator::config[1052]: No configuration found.
Apr 21 10:20:34.612357 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:20:34.612399 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:20:34.612408 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:20:34.612420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:20:34.612428 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:20:34.612438 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:20:34.612446 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:20:34.612454 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:20:34.612461 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:20:34.612469 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:20:34.612478 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:20:34.612489 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:20:34.612497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:20:34.612505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:20:34.612515 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:20:34.612523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:20:34.612530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:20:34.612538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:20:34.612545 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:20:34.612554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:20:34.612563 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:20:34.612570 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:20:34.612580 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:20:34.612588 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:20:34.612595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:20:34.612603 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:20:34.612611 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:20:34.612619 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:20:34.612626 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:20:34.612634 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:20:34.612644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:20:34.612651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:20:34.612659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:20:34.612666 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:20:34.612675 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:20:34.612682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:20:34.612690 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:20:34.612697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:34.612705 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:20:34.612713 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:20:34.612721 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:20:34.612730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:20:34.612737 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:20:34.612746 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:20:34.612754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:20:34.612762 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:20:34.612770 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:20:34.612777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:20:34.612787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:20:34.612795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:20:34.612802 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:20:34.612811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:20:34.612832 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:20:34.612840 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:20:34.612848 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:20:34.612856 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:20:34.612866 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:20:34.612873 kernel: fuse: init (API version 7.39)
Apr 21 10:20:34.612880 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:20:34.612888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:20:34.612895 kernel: ACPI: bus type drm_connector registered
Apr 21 10:20:34.612902 kernel: loop: module loaded
Apr 21 10:20:34.612911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:20:34.612929 systemd-journald[1129]: Collecting audit messages is disabled.
Apr 21 10:20:34.612947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:20:34.612956 systemd-journald[1129]: Journal started
Apr 21 10:20:34.612972 systemd-journald[1129]: Runtime Journal (/run/log/journal/610ba74f310549708b7300dd4c81d30e) is 6.0M, max 48.4M, 42.3M free.
Apr 21 10:20:34.230387 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:20:34.250187 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 10:20:34.250577 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:20:34.617386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:20:34.620126 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:20:34.620147 systemd[1]: Stopped verity-setup.service.
Apr 21 10:20:34.624440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:34.627461 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:20:34.629437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:20:34.631100 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:20:34.632793 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:20:34.634214 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:20:34.635719 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:20:34.637256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:20:34.638732 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:20:34.641514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:20:34.643661 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:20:34.643869 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:20:34.645603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:20:34.645722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:20:34.647572 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:20:34.647680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:20:34.649247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:20:34.649346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:20:34.651241 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:20:34.651356 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:20:34.652995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:20:34.653107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:20:34.655809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:20:34.657615 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:20:34.659569 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:20:34.676514 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:20:34.690884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:20:34.697284 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:20:34.699886 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:20:34.699937 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:20:34.704106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:20:34.727975 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:20:34.732390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:20:34.734229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:20:34.735395 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:20:34.739546 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:20:34.741342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:20:34.742303 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:20:34.743800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:20:34.748732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:20:34.749485 systemd-journald[1129]: Time spent on flushing to /var/log/journal/610ba74f310549708b7300dd4c81d30e is 10.661ms for 950 entries.
Apr 21 10:20:34.749485 systemd-journald[1129]: System Journal (/var/log/journal/610ba74f310549708b7300dd4c81d30e) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:20:34.775705 systemd-journald[1129]: Received client request to flush runtime journal.
Apr 21 10:20:34.755935 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:20:34.759213 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:20:34.764456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:20:34.766347 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:20:34.768539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:20:34.770608 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:20:34.780600 kernel: loop0: detected capacity change from 0 to 217752
Apr 21 10:20:34.781875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:20:34.784023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:20:34.786121 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:20:34.788867 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:20:34.796277 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:20:34.833580 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 10:20:34.849598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:20:34.863656 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Apr 21 10:20:34.863679 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Apr 21 10:20:34.866283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:20:34.883091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:20:34.903497 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:20:34.910139 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:20:34.910787 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:20:34.917001 kernel: loop1: detected capacity change from 0 to 140768
Apr 21 10:20:35.008510 kernel: loop2: detected capacity change from 0 to 142488
Apr 21 10:20:35.012523 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:20:35.025274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:20:35.148135 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 21 10:20:35.148174 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Apr 21 10:20:35.149952 kernel: loop3: detected capacity change from 0 to 217752
Apr 21 10:20:35.153350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:20:35.172424 kernel: loop4: detected capacity change from 0 to 140768
Apr 21 10:20:35.185440 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:20:35.212059 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 10:20:35.212578 (sd-merge)[1194]: Merged extensions into '/usr'.
Apr 21 10:20:35.217059 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:20:35.217076 systemd[1]: Reloading...
Apr 21 10:20:35.465411 zram_generator::config[1217]: No configuration found.
Apr 21 10:20:35.594222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:20:35.629330 systemd[1]: Reloading finished in 411 ms.
Apr 21 10:20:35.631140 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:20:35.657198 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:20:35.659429 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:20:35.735276 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:20:35.738936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:20:35.744481 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:20:35.744494 systemd[1]: Reloading...
Apr 21 10:20:35.770170 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:20:35.770507 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:20:35.771013 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:20:35.771173 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 21 10:20:35.771210 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 21 10:20:35.776880 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:20:35.776888 systemd-tmpfiles[1259]: Skipping /boot
Apr 21 10:20:35.786537 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:20:35.786558 systemd-tmpfiles[1259]: Skipping /boot
Apr 21 10:20:35.793411 zram_generator::config[1283]: No configuration found.
Apr 21 10:20:35.945269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:20:35.980768 systemd[1]: Reloading finished in 236 ms.
Apr 21 10:20:36.007308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:20:36.024699 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:20:36.034590 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:20:36.039160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:20:36.042581 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:20:36.048657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:20:36.055511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:20:36.060695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:20:36.066563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.066741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:20:36.078954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:20:36.133033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:20:36.147423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:20:36.150017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:20:36.163644 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:20:36.167443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.171989 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:20:36.177464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:20:36.181414 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Apr 21 10:20:36.186529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:20:36.189787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:20:36.189941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:20:36.191952 augenrules[1349]: No rules
Apr 21 10:20:36.193442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:20:36.196308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:20:36.196449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:20:36.210160 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:20:36.214238 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:20:36.217036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:20:36.236981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.237220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:20:36.244926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:20:36.252924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:20:36.261249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:20:36.263101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:20:36.270174 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:20:36.278860 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:20:36.281297 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:20:36.281942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.284167 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:20:36.294247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:20:36.294499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:20:36.331755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:20:36.331881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:20:36.337414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1362)
Apr 21 10:20:36.337099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:20:36.337214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:20:36.344247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.346040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:20:36.349476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:20:36.361866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:20:36.367599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:20:36.370639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:20:36.372999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:20:36.373139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:20:36.373205 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:20:36.374077 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:20:36.376755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:20:36.376902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:20:36.379588 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:20:36.380262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:20:36.386659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:20:36.386774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:20:36.395794 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:20:36.396007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:20:36.405599 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:20:36.410678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:20:36.410749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:20:36.459299 systemd-networkd[1383]: lo: Link UP
Apr 21 10:20:36.459315 systemd-networkd[1383]: lo: Gained carrier
Apr 21 10:20:36.460196 systemd-networkd[1383]: Enumeration completed
Apr 21 10:20:36.470015 systemd-resolved[1328]: Positive Trust Anchors:
Apr 21 10:20:36.470022 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:20:36.470032 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:20:36.470048 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:20:36.471201 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:20:36.471215 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:20:36.471823 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:20:36.471862 systemd-networkd[1383]: eth0: Link UP
Apr 21 10:20:36.471865 systemd-networkd[1383]: eth0: Gained carrier
Apr 21 10:20:36.471871 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:20:36.472003 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:20:36.473532 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Apr 21 10:20:36.474760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:20:36.478072 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:20:36.480103 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:20:36.482856 systemd[1]: Reached target network.target - Network.
Apr 21 10:20:36.486045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:20:36.488524 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:20:36.491421 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:20:36.496573 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:20:36.498684 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:20:36.501499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:20:36.517907 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:20:36.518207 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:20:36.518306 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:20:36.520224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:20:36.551476 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:20:36.555870 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:20:37.056849 systemd-resolved[1328]: Clock change detected. Flushing caches.
Apr 21 10:20:37.057071 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 10:20:37.057153 systemd-timesyncd[1406]: Initial clock synchronization to Tue 2026-04-21 10:20:37.056690 UTC.
Apr 21 10:20:37.075222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:20:37.088978 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:20:37.094216 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:20:37.264653 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:20:37.292369 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:20:37.294309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:20:37.306844 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:20:37.346666 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:20:37.349044 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:20:37.350809 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:20:37.352594 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:20:37.354642 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:20:37.356881 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:20:37.358735 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:20:37.360768 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:20:37.364180 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:20:37.364237 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:20:37.366619 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:20:37.369130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:20:37.372394 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:20:37.381726 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:20:37.386746 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:20:37.390162 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:20:37.392789 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:20:37.401082 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:20:37.403761 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:20:37.403798 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:20:37.416610 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:20:37.437727 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:20:37.443680 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:20:37.448771 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:20:37.452046 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:20:37.453772 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:20:37.457043 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:20:37.458969 jq[1435]: false
Apr 21 10:20:37.459824 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:20:37.462957 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:20:37.466292 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:20:37.470843 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:20:37.473811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:20:37.474181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:20:37.477178 dbus-daemon[1434]: [system] SELinux support is enabled
Apr 21 10:20:37.478051 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:20:37.482059 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found loop3
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found loop4
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found loop5
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found sr0
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda1
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda2
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda3
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found usr
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda4
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda6
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda7
Apr 21 10:20:37.487878 extend-filesystems[1436]: Found vda9
Apr 21 10:20:37.487878 extend-filesystems[1436]: Checking size of /dev/vda9
Apr 21 10:20:37.484818 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:20:37.519205 update_engine[1446]: I20260421 10:20:37.493010 1446 main.cc:92] Flatcar Update Engine starting
Apr 21 10:20:37.519205 update_engine[1446]: I20260421 10:20:37.494398 1446 update_check_scheduler.cc:74] Next update check in 2m50s
Apr 21 10:20:37.490106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:20:37.525341 extend-filesystems[1436]: Resized partition /dev/vda9
Apr 21 10:20:37.528945 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 10:20:37.529275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1364)
Apr 21 10:20:37.495147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:20:37.529461 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:20:37.538784 jq[1449]: true
Apr 21 10:20:37.495830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:20:37.496342 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:20:37.539207 jq[1456]: true
Apr 21 10:20:37.496491 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:20:37.501137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:20:37.501279 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:20:37.528106 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:20:37.529411 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:20:37.529599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:20:37.532137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:20:37.532157 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:20:37.544572 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:20:37.555058 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:20:37.558344 tar[1455]: linux-amd64/LICENSE
Apr 21 10:20:37.558344 tar[1455]: linux-amd64/helm
Apr 21 10:20:37.667298 kernel: hrtimer: interrupt took 4306323 ns
Apr 21 10:20:37.831995 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 10:20:37.853468 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:20:37.853494 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:20:37.854486 systemd-logind[1444]: New seat seat0.
Apr 21 10:20:37.858080 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 10:20:37.858080 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 10:20:37.858080 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 10:20:38.000135 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Apr 21 10:20:38.002924 bash[1488]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:20:37.859848 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:20:37.860037 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:20:37.992541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:20:37.993899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 10:20:38.001138 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:20:38.002188 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:20:38.034288 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:20:38.130185 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:20:38.142590 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:20:38.150422 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:20:38.150605 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:20:38.159236 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:20:38.181070 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:20:38.190436 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:20:38.295757 systemd-networkd[1383]: eth0: Gained IPv6LL
Apr 21 10:20:38.332639 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:20:38.334482 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:20:38.336280 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:20:38.339178 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:20:38.349199 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 21 10:20:38.352758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:20:38.364170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:20:38.384516 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:20:38.440083 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 21 10:20:38.440291 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 21 10:20:38.442357 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:20:38.620840 containerd[1457]: time="2026-04-21T10:20:38.620607245Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:20:38.723700 containerd[1457]: time="2026-04-21T10:20:38.723397899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726090 containerd[1457]: time="2026-04-21T10:20:38.726007181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726090 containerd[1457]: time="2026-04-21T10:20:38.726075985Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:20:38.726225 containerd[1457]: time="2026-04-21T10:20:38.726113969Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:20:38.726318 containerd[1457]: time="2026-04-21T10:20:38.726297336Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:20:38.726334 containerd[1457]: time="2026-04-21T10:20:38.726323665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726426 containerd[1457]: time="2026-04-21T10:20:38.726381556Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726454 containerd[1457]: time="2026-04-21T10:20:38.726422168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726678 containerd[1457]: time="2026-04-21T10:20:38.726641110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726696 containerd[1457]: time="2026-04-21T10:20:38.726680390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726696 containerd[1457]: time="2026-04-21T10:20:38.726691636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726734 containerd[1457]: time="2026-04-21T10:20:38.726699901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.726785 containerd[1457]: time="2026-04-21T10:20:38.726765626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.727025 containerd[1457]: time="2026-04-21T10:20:38.727002492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:20:38.727122 containerd[1457]: time="2026-04-21T10:20:38.727100466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:20:38.727140 containerd[1457]: time="2026-04-21T10:20:38.727124159Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:20:38.727208 containerd[1457]: time="2026-04-21T10:20:38.727188936Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:20:38.727267 containerd[1457]: time="2026-04-21T10:20:38.727248861Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:20:38.734145 containerd[1457]: time="2026-04-21T10:20:38.733929325Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:20:38.734325 containerd[1457]: time="2026-04-21T10:20:38.734177952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:20:38.734325 containerd[1457]: time="2026-04-21T10:20:38.734256910Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:20:38.734325 containerd[1457]: time="2026-04-21T10:20:38.734282824Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:20:38.734325 containerd[1457]: time="2026-04-21T10:20:38.734319998Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:20:38.734558 containerd[1457]: time="2026-04-21T10:20:38.734532827Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:20:38.735212 containerd[1457]: time="2026-04-21T10:20:38.735191065Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:20:38.735352 containerd[1457]: time="2026-04-21T10:20:38.735329838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:20:38.735412 containerd[1457]: time="2026-04-21T10:20:38.735352528Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:20:38.735412 containerd[1457]: time="2026-04-21T10:20:38.735362728Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:20:38.735412 containerd[1457]: time="2026-04-21T10:20:38.735373892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735412 containerd[1457]: time="2026-04-21T10:20:38.735406405Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735462 containerd[1457]: time="2026-04-21T10:20:38.735423836Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735462 containerd[1457]: time="2026-04-21T10:20:38.735435329Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735462 containerd[1457]: time="2026-04-21T10:20:38.735454290Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735513 containerd[1457]: time="2026-04-21T10:20:38.735464899Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735513 containerd[1457]: time="2026-04-21T10:20:38.735474239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735513 containerd[1457]: time="2026-04-21T10:20:38.735482905Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:20:38.735550 containerd[1457]: time="2026-04-21T10:20:38.735519282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735550 containerd[1457]: time="2026-04-21T10:20:38.735537675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735585 containerd[1457]: time="2026-04-21T10:20:38.735559004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735585 containerd[1457]: time="2026-04-21T10:20:38.735570506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735623 containerd[1457]: time="2026-04-21T10:20:38.735587582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735665 containerd[1457]: time="2026-04-21T10:20:38.735641779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735679 containerd[1457]: time="2026-04-21T10:20:38.735666287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735704 containerd[1457]: time="2026-04-21T10:20:38.735677098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735704 containerd[1457]: time="2026-04-21T10:20:38.735687502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735704 containerd[1457]: time="2026-04-21T10:20:38.735697942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735741 containerd[1457]: time="2026-04-21T10:20:38.735725017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735741 containerd[1457]: time="2026-04-21T10:20:38.735734544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735776 containerd[1457]: time="2026-04-21T10:20:38.735772393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735790 containerd[1457]: time="2026-04-21T10:20:38.735784085Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:20:38.735822 containerd[1457]: time="2026-04-21T10:20:38.735809172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735822 containerd[1457]: time="2026-04-21T10:20:38.735819293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.735849 containerd[1457]: time="2026-04-21T10:20:38.735829302Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:20:38.735963 containerd[1457]: time="2026-04-21T10:20:38.735952381Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:20:38.735992 containerd[1457]: time="2026-04-21T10:20:38.735976807Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:20:38.735992 containerd[1457]: time="2026-04-21T10:20:38.735985708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:20:38.736038 containerd[1457]: time="2026-04-21T10:20:38.736000454Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:20:38.736038 containerd[1457]: time="2026-04-21T10:20:38.736007842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.736038 containerd[1457]: time="2026-04-21T10:20:38.736016618Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:20:38.736075 containerd[1457]: time="2026-04-21T10:20:38.736037386Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:20:38.736075 containerd[1457]: time="2026-04-21T10:20:38.736052066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:20:38.737483 containerd[1457]: time="2026-04-21T10:20:38.736708672Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:20:38.738020 containerd[1457]: time="2026-04-21T10:20:38.737615311Z" level=info msg="Connect containerd service" Apr 21 10:20:38.738039 containerd[1457]: time="2026-04-21T10:20:38.738021756Z" level=info msg="using legacy CRI server" Apr 21 10:20:38.738053 containerd[1457]: time="2026-04-21T10:20:38.738039516Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:20:38.738397 containerd[1457]: time="2026-04-21T10:20:38.738351599Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:20:38.739375 containerd[1457]: time="2026-04-21T10:20:38.739334006Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:20:38.739715 containerd[1457]: time="2026-04-21T10:20:38.739582499Z" level=info msg="Start subscribing containerd event" Apr 21 10:20:38.739772 containerd[1457]: time="2026-04-21T10:20:38.739696592Z" level=info msg="Start recovering state" Apr 21 10:20:38.739999 containerd[1457]: time="2026-04-21T10:20:38.739959129Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 21 10:20:38.740024 containerd[1457]: time="2026-04-21T10:20:38.740015263Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:20:38.741429 containerd[1457]: time="2026-04-21T10:20:38.741266053Z" level=info msg="Start event monitor" Apr 21 10:20:38.741429 containerd[1457]: time="2026-04-21T10:20:38.741315646Z" level=info msg="Start snapshots syncer" Apr 21 10:20:38.741429 containerd[1457]: time="2026-04-21T10:20:38.741347748Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:20:38.741429 containerd[1457]: time="2026-04-21T10:20:38.741360442Z" level=info msg="Start streaming server" Apr 21 10:20:38.743117 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:20:38.743479 containerd[1457]: time="2026-04-21T10:20:38.743436771Z" level=info msg="containerd successfully booted in 0.125127s" Apr 21 10:20:39.124964 tar[1455]: linux-amd64/README.md Apr 21 10:20:39.188294 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:20:40.462667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:20:40.464656 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:20:40.466353 systemd[1]: Startup finished in 1.014s (kernel) + 6.180s (initrd) + 6.154s (userspace) = 13.349s. 
Apr 21 10:20:40.504802 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:20:41.476733 kubelet[1547]: E0421 10:20:41.476597 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:20:41.479235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:20:41.479354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:20:41.479676 systemd[1]: kubelet.service: Consumed 2.665s CPU time. Apr 21 10:20:41.827703 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:20:41.828951 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198). Apr 21 10:20:41.896052 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:41.900110 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:41.910761 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:20:41.920180 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:20:41.922121 systemd-logind[1444]: New session 1 of user core. Apr 21 10:20:41.931713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:20:41.935599 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 21 10:20:41.942350 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:20:42.018053 systemd[1564]: Queued start job for default target default.target. Apr 21 10:20:42.030809 systemd[1564]: Created slice app.slice - User Application Slice. Apr 21 10:20:42.030851 systemd[1564]: Reached target paths.target - Paths. Apr 21 10:20:42.030862 systemd[1564]: Reached target timers.target - Timers. Apr 21 10:20:42.032336 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:20:42.043269 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:20:42.043375 systemd[1564]: Reached target sockets.target - Sockets. Apr 21 10:20:42.043386 systemd[1564]: Reached target basic.target - Basic System. Apr 21 10:20:42.044422 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:20:42.044633 systemd[1564]: Reached target default.target - Main User Target. Apr 21 10:20:42.044691 systemd[1564]: Startup finished in 97ms. Apr 21 10:20:42.045777 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:20:42.126608 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:51204.service - OpenSSH per-connection server daemon (10.0.0.1:51204). Apr 21 10:20:42.160114 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51204 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:42.162020 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:42.166403 systemd-logind[1444]: New session 2 of user core. Apr 21 10:20:42.185311 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:20:42.242778 sshd[1575]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:42.257240 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:51204.service: Deactivated successfully. Apr 21 10:20:42.258645 systemd[1]: session-2.scope: Deactivated successfully. 
Apr 21 10:20:42.259836 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:20:42.274095 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:51210.service - OpenSSH per-connection server daemon (10.0.0.1:51210). Apr 21 10:20:42.276534 systemd-logind[1444]: Removed session 2. Apr 21 10:20:42.306898 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 51210 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:42.308557 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:42.312529 systemd-logind[1444]: New session 3 of user core. Apr 21 10:20:42.325562 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:20:42.446991 sshd[1582]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:42.468448 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:51210.service: Deactivated successfully. Apr 21 10:20:42.471744 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:20:42.475540 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:20:42.487930 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:51212.service - OpenSSH per-connection server daemon (10.0.0.1:51212). Apr 21 10:20:42.488787 systemd-logind[1444]: Removed session 3. Apr 21 10:20:42.537498 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 51212 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:42.540728 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:42.545292 systemd-logind[1444]: New session 4 of user core. Apr 21 10:20:42.565771 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:20:42.628606 sshd[1589]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:42.646095 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:51212.service: Deactivated successfully. Apr 21 10:20:42.647810 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 21 10:20:42.651767 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:20:42.665293 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:51226.service - OpenSSH per-connection server daemon (10.0.0.1:51226). Apr 21 10:20:42.672051 systemd-logind[1444]: Removed session 4. Apr 21 10:20:42.706861 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 51226 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:42.712657 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:42.717030 systemd-logind[1444]: New session 5 of user core. Apr 21 10:20:42.732133 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:20:42.797850 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:20:42.798179 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:20:42.820247 sudo[1600]: pam_unix(sudo:session): session closed for user root Apr 21 10:20:42.832206 sshd[1597]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:42.848139 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:51226.service: Deactivated successfully. Apr 21 10:20:42.850804 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:20:42.852136 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:20:42.865542 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:51240.service - OpenSSH per-connection server daemon (10.0.0.1:51240). Apr 21 10:20:42.869623 systemd-logind[1444]: Removed session 5. Apr 21 10:20:42.918557 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 51240 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:42.920402 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:42.927744 systemd-logind[1444]: New session 6 of user core. 
Apr 21 10:20:42.939602 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:20:43.000673 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:20:43.000958 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:20:43.014841 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 21 10:20:43.023171 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:20:43.023408 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:20:43.042525 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:20:43.046118 auditctl[1612]: No rules Apr 21 10:20:43.047157 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:20:43.047326 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:20:43.050248 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:20:43.090650 augenrules[1630]: No rules Apr 21 10:20:43.091365 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:20:43.092370 sudo[1608]: pam_unix(sudo:session): session closed for user root Apr 21 10:20:43.095750 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:43.100955 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:51240.service: Deactivated successfully. Apr 21 10:20:43.105636 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:20:43.106799 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:20:43.117545 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:51244.service - OpenSSH per-connection server daemon (10.0.0.1:51244). Apr 21 10:20:43.118737 systemd-logind[1444]: Removed session 6. 
Apr 21 10:20:43.153887 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 51244 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:20:43.155699 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:43.159795 systemd-logind[1444]: New session 7 of user core. Apr 21 10:20:43.174173 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:20:43.228960 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:20:43.229157 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:20:44.128247 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:20:44.128351 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:20:45.313091 dockerd[1660]: time="2026-04-21T10:20:45.312830600Z" level=info msg="Starting up" Apr 21 10:20:45.694134 systemd[1]: var-lib-docker-metacopy\x2dcheck742261505-merged.mount: Deactivated successfully. Apr 21 10:20:45.720024 dockerd[1660]: time="2026-04-21T10:20:45.719927010Z" level=info msg="Loading containers: start." Apr 21 10:20:45.851972 kernel: Initializing XFRM netlink socket Apr 21 10:20:45.933276 systemd-networkd[1383]: docker0: Link UP Apr 21 10:20:45.958494 dockerd[1660]: time="2026-04-21T10:20:45.957598536Z" level=info msg="Loading containers: done." 
Apr 21 10:20:46.000474 dockerd[1660]: time="2026-04-21T10:20:46.000233271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:20:46.000739 dockerd[1660]: time="2026-04-21T10:20:46.000616567Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:20:46.000839 dockerd[1660]: time="2026-04-21T10:20:46.000776626Z" level=info msg="Daemon has completed initialization" Apr 21 10:20:46.056051 dockerd[1660]: time="2026-04-21T10:20:46.055613195Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:20:46.056684 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:20:46.844882 containerd[1457]: time="2026-04-21T10:20:46.844723176Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 10:20:48.018514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980270824.mount: Deactivated successfully. 
Apr 21 10:20:49.710882 containerd[1457]: time="2026-04-21T10:20:49.710471202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 21 10:20:49.710882 containerd[1457]: time="2026-04-21T10:20:49.710667126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:49.712599 containerd[1457]: time="2026-04-21T10:20:49.712535336Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:49.715662 containerd[1457]: time="2026-04-21T10:20:49.715618153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:49.716736 containerd[1457]: time="2026-04-21T10:20:49.716694153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.871842858s" Apr 21 10:20:49.716736 containerd[1457]: time="2026-04-21T10:20:49.716729109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 21 10:20:49.719753 containerd[1457]: time="2026-04-21T10:20:49.719709253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 21 10:20:51.278019 containerd[1457]: time="2026-04-21T10:20:51.277728971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.280022 containerd[1457]: time="2026-04-21T10:20:51.279562125Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 21 10:20:51.280522 containerd[1457]: time="2026-04-21T10:20:51.280448469Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.286414 containerd[1457]: time="2026-04-21T10:20:51.285808928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:51.289683 containerd[1457]: time="2026-04-21T10:20:51.289643055Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.569887869s" Apr 21 10:20:51.289683 containerd[1457]: time="2026-04-21T10:20:51.289680590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 21 10:20:51.291519 containerd[1457]: time="2026-04-21T10:20:51.291253980Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 10:20:51.653663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:20:51.888533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:20:52.452517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:20:52.461204 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:20:52.567984 kubelet[1879]: E0421 10:20:52.567659 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:20:52.571273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:20:52.572134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:20:53.206846 containerd[1457]: time="2026-04-21T10:20:53.206620940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:53.208733 containerd[1457]: time="2026-04-21T10:20:53.207383708Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 21 10:20:53.211207 containerd[1457]: time="2026-04-21T10:20:53.210930881Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:53.216377 containerd[1457]: time="2026-04-21T10:20:53.216302673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:53.217542 containerd[1457]: time="2026-04-21T10:20:53.217453116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.926164243s" Apr 21 10:20:53.217542 containerd[1457]: time="2026-04-21T10:20:53.217522973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 21 10:20:53.219071 containerd[1457]: time="2026-04-21T10:20:53.219044986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 10:20:54.559558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425498065.mount: Deactivated successfully. Apr 21 10:20:55.093237 containerd[1457]: time="2026-04-21T10:20:55.092994036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:55.094773 containerd[1457]: time="2026-04-21T10:20:55.094232093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 21 10:20:55.096042 containerd[1457]: time="2026-04-21T10:20:55.096006518Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:55.098262 containerd[1457]: time="2026-04-21T10:20:55.098206282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:55.098760 containerd[1457]: time="2026-04-21T10:20:55.098729124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.879654586s" Apr 21 10:20:55.098788 containerd[1457]: time="2026-04-21T10:20:55.098761336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 21 10:20:55.100085 containerd[1457]: time="2026-04-21T10:20:55.100054510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 10:20:55.836930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202478585.mount: Deactivated successfully. Apr 21 10:20:57.353245 containerd[1457]: time="2026-04-21T10:20:57.352976067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.355185 containerd[1457]: time="2026-04-21T10:20:57.353296865Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 21 10:20:57.356552 containerd[1457]: time="2026-04-21T10:20:57.356493509Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.359365 containerd[1457]: time="2026-04-21T10:20:57.359323154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.360116 containerd[1457]: time="2026-04-21T10:20:57.360091253Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.260003038s" Apr 21 10:20:57.360153 containerd[1457]: time="2026-04-21T10:20:57.360124424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 21 10:20:57.361638 containerd[1457]: time="2026-04-21T10:20:57.361437958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 10:20:57.902979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388248160.mount: Deactivated successfully. Apr 21 10:20:57.913568 containerd[1457]: time="2026-04-21T10:20:57.913348583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.914228 containerd[1457]: time="2026-04-21T10:20:57.914127522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 21 10:20:57.915026 containerd[1457]: time="2026-04-21T10:20:57.914980982Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.919296 containerd[1457]: time="2026-04-21T10:20:57.917748652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:20:57.919296 containerd[1457]: time="2026-04-21T10:20:57.918661367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
557.197854ms" Apr 21 10:20:57.919296 containerd[1457]: time="2026-04-21T10:20:57.918704039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 10:20:57.923426 containerd[1457]: time="2026-04-21T10:20:57.923093973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 10:20:58.808991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310977499.mount: Deactivated successfully. Apr 21 10:21:00.227004 containerd[1457]: time="2026-04-21T10:21:00.226410114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:00.228106 containerd[1457]: time="2026-04-21T10:21:00.227913320Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 21 10:21:00.229488 containerd[1457]: time="2026-04-21T10:21:00.229442041Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:00.234466 containerd[1457]: time="2026-04-21T10:21:00.234300515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:00.235189 containerd[1457]: time="2026-04-21T10:21:00.235152624Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.311766011s" Apr 21 10:21:00.235189 containerd[1457]: time="2026-04-21T10:21:00.235185754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" 
returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 21 10:21:01.842868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:01.854205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:21:01.876950 systemd[1]: Reloading requested from client PID 2051 ('systemctl') (unit session-7.scope)... Apr 21 10:21:01.876977 systemd[1]: Reloading... Apr 21 10:21:01.957943 zram_generator::config[2090]: No configuration found. Apr 21 10:21:02.046720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:21:02.096776 systemd[1]: Reloading finished in 219 ms. Apr 21 10:21:02.136400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:21:02.138487 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:21:02.138662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:02.139791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:21:02.272799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:02.276475 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:21:02.344786 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:21:02.459670 kubelet[2140]: I0421 10:21:02.459118 2140 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:21:02.459670 kubelet[2140]: I0421 10:21:02.459492 2140 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:21:02.459670 kubelet[2140]: I0421 10:21:02.459673 2140 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:21:02.459670 kubelet[2140]: I0421 10:21:02.459682 2140 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:21:02.460764 kubelet[2140]: I0421 10:21:02.460709 2140 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:21:02.487240 kubelet[2140]: I0421 10:21:02.487030 2140 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:21:02.487240 kubelet[2140]: E0421 10:21:02.487156 2140 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:21:02.494369 kubelet[2140]: E0421 10:21:02.493512 2140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:21:02.495013 kubelet[2140]: I0421 10:21:02.494623 2140 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:21:02.500565 kubelet[2140]: I0421 10:21:02.500520 2140 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:21:02.501335 kubelet[2140]: I0421 10:21:02.501299 2140 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:21:02.501595 kubelet[2140]: I0421 10:21:02.501329 2140 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:21:02.501791 kubelet[2140]: I0421 10:21:02.501622 2140 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:21:02.501791 
kubelet[2140]: I0421 10:21:02.501630 2140 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:21:02.501832 kubelet[2140]: I0421 10:21:02.501817 2140 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:21:02.504255 kubelet[2140]: I0421 10:21:02.504226 2140 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:21:02.504580 kubelet[2140]: I0421 10:21:02.504534 2140 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:21:02.504580 kubelet[2140]: I0421 10:21:02.504571 2140 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:21:02.504683 kubelet[2140]: I0421 10:21:02.504660 2140 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:21:02.504705 kubelet[2140]: I0421 10:21:02.504693 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:21:02.511522 kubelet[2140]: I0421 10:21:02.510569 2140 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:21:02.513611 kubelet[2140]: I0421 10:21:02.513206 2140 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:21:02.513611 kubelet[2140]: I0421 10:21:02.513243 2140 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:21:02.513611 kubelet[2140]: W0421 10:21:02.513316 2140 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 10:21:02.515976 kubelet[2140]: I0421 10:21:02.515959 2140 server.go:1257] "Started kubelet" Apr 21 10:21:02.685072 kubelet[2140]: I0421 10:21:02.684727 2140 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:21:02.687803 kubelet[2140]: I0421 10:21:02.687763 2140 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:21:02.688118 kubelet[2140]: I0421 10:21:02.688081 2140 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:21:02.689351 kubelet[2140]: I0421 10:21:02.689011 2140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:21:02.690987 kubelet[2140]: I0421 10:21:02.690936 2140 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:21:02.691040 kubelet[2140]: I0421 10:21:02.691001 2140 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:21:02.691292 kubelet[2140]: I0421 10:21:02.691262 2140 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:21:02.691418 kubelet[2140]: E0421 10:21:02.691404 2140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:21:02.691476 kubelet[2140]: I0421 10:21:02.691448 2140 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:21:02.692271 kubelet[2140]: I0421 10:21:02.691651 2140 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:21:02.692271 kubelet[2140]: I0421 10:21:02.691711 2140 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:21:02.692271 kubelet[2140]: E0421 10:21:02.692168 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.60:6443: connect: connection refused" interval="200ms" Apr 21 10:21:02.693062 kubelet[2140]: E0421 10:21:02.691319 2140 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8580c38f1f690 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:21:02.515893904 +0000 UTC m=+0.233894362,LastTimestamp:2026-04-21 10:21:02.515893904 +0000 UTC m=+0.233894362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:21:02.693599 kubelet[2140]: I0421 10:21:02.693566 2140 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:21:02.694111 kubelet[2140]: I0421 10:21:02.693732 2140 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:21:02.695102 kubelet[2140]: E0421 10:21:02.695067 2140 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:21:02.695270 kubelet[2140]: I0421 10:21:02.695258 2140 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:21:02.709733 kubelet[2140]: I0421 10:21:02.709606 2140 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:21:02.711317 kubelet[2140]: I0421 10:21:02.711286 2140 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:21:02.711371 kubelet[2140]: I0421 10:21:02.711347 2140 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:21:02.711389 kubelet[2140]: I0421 10:21:02.711377 2140 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:21:02.711499 kubelet[2140]: E0421 10:21:02.711463 2140 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:21:02.713241 kubelet[2140]: I0421 10:21:02.713219 2140 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:21:02.713241 kubelet[2140]: I0421 10:21:02.713234 2140 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:21:02.713316 kubelet[2140]: I0421 10:21:02.713253 2140 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:21:02.715983 kubelet[2140]: I0421 10:21:02.715966 2140 policy_none.go:50] "Start" Apr 21 10:21:02.716035 kubelet[2140]: I0421 10:21:02.715997 2140 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:21:02.716035 kubelet[2140]: I0421 10:21:02.716013 2140 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:21:02.718338 kubelet[2140]: I0421 10:21:02.718312 2140 policy_none.go:44] "Start" Apr 21 10:21:02.722279 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:21:02.735663 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:21:02.749317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:21:02.757869 kubelet[2140]: E0421 10:21:02.757781 2140 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:21:02.758194 kubelet[2140]: I0421 10:21:02.758096 2140 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:21:02.758194 kubelet[2140]: I0421 10:21:02.758122 2140 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:21:02.758415 kubelet[2140]: I0421 10:21:02.758381 2140 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:21:02.760717 kubelet[2140]: E0421 10:21:02.760692 2140 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:21:02.760802 kubelet[2140]: E0421 10:21:02.760777 2140 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:21:02.830970 systemd[1]: Created slice kubepods-burstable-podd65de6e7a6cbbcddf18a4a3b78dfd01a.slice - libcontainer container kubepods-burstable-podd65de6e7a6cbbcddf18a4a3b78dfd01a.slice. Apr 21 10:21:02.856159 kubelet[2140]: E0421 10:21:02.856111 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:02.858626 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. 
Apr 21 10:21:02.859172 kubelet[2140]: I0421 10:21:02.859034 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:02.859466 kubelet[2140]: E0421 10:21:02.859439 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Apr 21 10:21:02.865963 kubelet[2140]: E0421 10:21:02.865897 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:02.867897 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. Apr 21 10:21:02.869019 kubelet[2140]: E0421 10:21:02.868989 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:02.892738 kubelet[2140]: E0421 10:21:02.892681 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Apr 21 10:21:02.893871 kubelet[2140]: I0421 10:21:02.893831 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:02.893871 kubelet[2140]: I0421 10:21:02.893868 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:02.893871 kubelet[2140]: I0421 10:21:02.893888 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:02.894043 kubelet[2140]: I0421 10:21:02.893935 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:02.894043 kubelet[2140]: I0421 10:21:02.893950 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:02.894043 kubelet[2140]: I0421 10:21:02.893965 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:02.894043 kubelet[2140]: I0421 10:21:02.894030 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:02.894146 kubelet[2140]: I0421 10:21:02.894054 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:02.894146 kubelet[2140]: I0421 10:21:02.894068 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:03.068383 kubelet[2140]: I0421 10:21:03.067937 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:03.068682 kubelet[2140]: E0421 10:21:03.068451 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Apr 21 10:21:03.168797 kubelet[2140]: E0421 10:21:03.168535 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:03.169989 kubelet[2140]: E0421 10:21:03.169647 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:03.171561 containerd[1457]: time="2026-04-21T10:21:03.171507858Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d65de6e7a6cbbcddf18a4a3b78dfd01a,Namespace:kube-system,Attempt:0,}" Apr 21 10:21:03.172058 containerd[1457]: time="2026-04-21T10:21:03.171510366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 21 10:21:03.172085 kubelet[2140]: E0421 10:21:03.171667 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:03.172132 containerd[1457]: time="2026-04-21T10:21:03.172103434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 21 10:21:03.296380 kubelet[2140]: E0421 10:21:03.296147 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Apr 21 10:21:03.477195 kubelet[2140]: I0421 10:21:03.477153 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:03.477706 kubelet[2140]: E0421 10:21:03.477570 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Apr 21 10:21:03.640330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254125231.mount: Deactivated successfully. 
Apr 21 10:21:03.653090 containerd[1457]: time="2026-04-21T10:21:03.652803071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:21:03.655944 containerd[1457]: time="2026-04-21T10:21:03.655784655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:21:03.657538 containerd[1457]: time="2026-04-21T10:21:03.657314019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:21:03.658951 containerd[1457]: time="2026-04-21T10:21:03.658896560Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:21:03.661017 containerd[1457]: time="2026-04-21T10:21:03.660647836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:21:03.661017 containerd[1457]: time="2026-04-21T10:21:03.660699890Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:21:03.664190 containerd[1457]: time="2026-04-21T10:21:03.663703146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:21:03.668760 containerd[1457]: time="2026-04-21T10:21:03.668610177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:21:03.669645 
containerd[1457]: time="2026-04-21T10:21:03.669611817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.462946ms" Apr 21 10:21:03.670154 containerd[1457]: time="2026-04-21T10:21:03.670123332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.421091ms" Apr 21 10:21:03.673298 containerd[1457]: time="2026-04-21T10:21:03.673256746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.586179ms" Apr 21 10:21:03.996220 containerd[1457]: time="2026-04-21T10:21:03.994435409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:03.996220 containerd[1457]: time="2026-04-21T10:21:03.995707013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:03.996220 containerd[1457]: time="2026-04-21T10:21:03.995719685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:03.996220 containerd[1457]: time="2026-04-21T10:21:03.995836230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:03.997271 containerd[1457]: time="2026-04-21T10:21:03.996343025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:03.997271 containerd[1457]: time="2026-04-21T10:21:03.996401872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:03.997271 containerd[1457]: time="2026-04-21T10:21:03.996410804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:03.997271 containerd[1457]: time="2026-04-21T10:21:03.996521865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:04.001707 containerd[1457]: time="2026-04-21T10:21:04.001564803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:04.001707 containerd[1457]: time="2026-04-21T10:21:04.001627298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:04.001791 containerd[1457]: time="2026-04-21T10:21:04.001732797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:04.001829 containerd[1457]: time="2026-04-21T10:21:04.001802898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:04.044089 systemd[1]: Started cri-containerd-6cd9fae57eee06ec6731f765aaede088e4f3e9a453626f693b2efd08113c9333.scope - libcontainer container 6cd9fae57eee06ec6731f765aaede088e4f3e9a453626f693b2efd08113c9333. 
Apr 21 10:21:04.048112 systemd[1]: Started cri-containerd-6bd16e069971224e2ed00d96cb182a52d7553d3a076023746cbf09d678874d61.scope - libcontainer container 6bd16e069971224e2ed00d96cb182a52d7553d3a076023746cbf09d678874d61. Apr 21 10:21:04.048958 systemd[1]: Started cri-containerd-77896aee0b492f00b6f36580c43ee217448a123ad930c42cdd0c93e05070b5ae.scope - libcontainer container 77896aee0b492f00b6f36580c43ee217448a123ad930c42cdd0c93e05070b5ae. Apr 21 10:21:04.119821 kubelet[2140]: E0421 10:21:04.119606 2140 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s" Apr 21 10:21:04.155532 containerd[1457]: time="2026-04-21T10:21:04.154524937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd16e069971224e2ed00d96cb182a52d7553d3a076023746cbf09d678874d61\"" Apr 21 10:21:04.165223 containerd[1457]: time="2026-04-21T10:21:04.165013932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cd9fae57eee06ec6731f765aaede088e4f3e9a453626f693b2efd08113c9333\"" Apr 21 10:21:04.174014 kubelet[2140]: E0421 10:21:04.173259 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:04.176080 kubelet[2140]: E0421 10:21:04.175659 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:04.177099 containerd[1457]: time="2026-04-21T10:21:04.177054639Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d65de6e7a6cbbcddf18a4a3b78dfd01a,Namespace:kube-system,Attempt:0,} returns sandbox id \"77896aee0b492f00b6f36580c43ee217448a123ad930c42cdd0c93e05070b5ae\"" Apr 21 10:21:04.177810 kubelet[2140]: E0421 10:21:04.177772 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:04.186574 containerd[1457]: time="2026-04-21T10:21:04.186385191Z" level=info msg="CreateContainer within sandbox \"6bd16e069971224e2ed00d96cb182a52d7553d3a076023746cbf09d678874d61\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:21:04.188196 containerd[1457]: time="2026-04-21T10:21:04.188138537Z" level=info msg="CreateContainer within sandbox \"77896aee0b492f00b6f36580c43ee217448a123ad930c42cdd0c93e05070b5ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:21:04.193281 containerd[1457]: time="2026-04-21T10:21:04.193102542Z" level=info msg="CreateContainer within sandbox \"6cd9fae57eee06ec6731f765aaede088e4f3e9a453626f693b2efd08113c9333\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:21:04.210167 kubelet[2140]: E0421 10:21:04.209874 2140 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8580c38f1f690 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:21:02.515893904 +0000 UTC m=+0.233894362,LastTimestamp:2026-04-21 10:21:02.515893904 +0000 UTC 
m=+0.233894362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:21:04.220698 containerd[1457]: time="2026-04-21T10:21:04.220389701Z" level=info msg="CreateContainer within sandbox \"6cd9fae57eee06ec6731f765aaede088e4f3e9a453626f693b2efd08113c9333\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"869e7332ac84e92e30dac824086864b8d7bf52613245150706aa805c6878a9b6\"" Apr 21 10:21:04.221619 containerd[1457]: time="2026-04-21T10:21:04.221589712Z" level=info msg="CreateContainer within sandbox \"6bd16e069971224e2ed00d96cb182a52d7553d3a076023746cbf09d678874d61\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db457ac8035c72c053f85c53cb4427c42e1bbbd5119e9b7fde20996c5e6f7d97\"" Apr 21 10:21:04.223759 containerd[1457]: time="2026-04-21T10:21:04.223715531Z" level=info msg="CreateContainer within sandbox \"77896aee0b492f00b6f36580c43ee217448a123ad930c42cdd0c93e05070b5ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cef06a405a3b5268a5bb5e36381ddf2029f244e62bce9a9f2b2e6042cd2f7f9\"" Apr 21 10:21:04.224229 containerd[1457]: time="2026-04-21T10:21:04.224184020Z" level=info msg="StartContainer for \"869e7332ac84e92e30dac824086864b8d7bf52613245150706aa805c6878a9b6\"" Apr 21 10:21:04.224376 containerd[1457]: time="2026-04-21T10:21:04.224356047Z" level=info msg="StartContainer for \"2cef06a405a3b5268a5bb5e36381ddf2029f244e62bce9a9f2b2e6042cd2f7f9\"" Apr 21 10:21:04.225738 containerd[1457]: time="2026-04-21T10:21:04.224989462Z" level=info msg="StartContainer for \"db457ac8035c72c053f85c53cb4427c42e1bbbd5119e9b7fde20996c5e6f7d97\"" Apr 21 10:21:04.268432 systemd[1]: Started cri-containerd-2cef06a405a3b5268a5bb5e36381ddf2029f244e62bce9a9f2b2e6042cd2f7f9.scope - libcontainer container 2cef06a405a3b5268a5bb5e36381ddf2029f244e62bce9a9f2b2e6042cd2f7f9. 
Apr 21 10:21:04.272158 systemd[1]: Started cri-containerd-869e7332ac84e92e30dac824086864b8d7bf52613245150706aa805c6878a9b6.scope - libcontainer container 869e7332ac84e92e30dac824086864b8d7bf52613245150706aa805c6878a9b6. Apr 21 10:21:04.274532 systemd[1]: Started cri-containerd-db457ac8035c72c053f85c53cb4427c42e1bbbd5119e9b7fde20996c5e6f7d97.scope - libcontainer container db457ac8035c72c053f85c53cb4427c42e1bbbd5119e9b7fde20996c5e6f7d97. Apr 21 10:21:04.286038 kubelet[2140]: I0421 10:21:04.285890 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:04.286824 kubelet[2140]: E0421 10:21:04.286771 2140 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Apr 21 10:21:04.339024 containerd[1457]: time="2026-04-21T10:21:04.338434509Z" level=info msg="StartContainer for \"869e7332ac84e92e30dac824086864b8d7bf52613245150706aa805c6878a9b6\" returns successfully" Apr 21 10:21:04.339222 containerd[1457]: time="2026-04-21T10:21:04.339183620Z" level=info msg="StartContainer for \"2cef06a405a3b5268a5bb5e36381ddf2029f244e62bce9a9f2b2e6042cd2f7f9\" returns successfully" Apr 21 10:21:04.360724 containerd[1457]: time="2026-04-21T10:21:04.360527315Z" level=info msg="StartContainer for \"db457ac8035c72c053f85c53cb4427c42e1bbbd5119e9b7fde20996c5e6f7d97\" returns successfully" Apr 21 10:21:04.733607 kubelet[2140]: E0421 10:21:04.733373 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:04.734137 kubelet[2140]: E0421 10:21:04.733668 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:04.735694 kubelet[2140]: E0421 10:21:04.735665 2140 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:04.735870 kubelet[2140]: E0421 10:21:04.735771 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:04.743781 kubelet[2140]: E0421 10:21:04.743607 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:04.744072 kubelet[2140]: E0421 10:21:04.743928 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:05.741830 kubelet[2140]: E0421 10:21:05.741639 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:05.742592 kubelet[2140]: E0421 10:21:05.741970 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:05.743357 kubelet[2140]: E0421 10:21:05.743336 2140 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:21:05.743677 kubelet[2140]: E0421 10:21:05.743453 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:05.900381 kubelet[2140]: I0421 10:21:05.897442 2140 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:06.290280 kubelet[2140]: E0421 10:21:06.290103 2140 nodelease.go:50] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:21:06.407143 kubelet[2140]: I0421 10:21:06.405023 2140 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 21 10:21:06.407143 kubelet[2140]: E0421 10:21:06.405228 2140 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 21 10:21:06.522610 kubelet[2140]: I0421 10:21:06.492162 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:06.522610 kubelet[2140]: I0421 10:21:06.518278 2140 apiserver.go:52] "Watching apiserver" Apr 21 10:21:06.735062 kubelet[2140]: E0421 10:21:06.734675 2140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:06.735062 kubelet[2140]: I0421 10:21:06.734718 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:06.739856 kubelet[2140]: E0421 10:21:06.739807 2140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:06.739982 kubelet[2140]: I0421 10:21:06.739870 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:06.742394 kubelet[2140]: E0421 10:21:06.742350 2140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:06.794110 kubelet[2140]: I0421 10:21:06.793767 2140 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 
10:21:07.040844 kubelet[2140]: I0421 10:21:07.040453 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:07.042973 kubelet[2140]: E0421 10:21:07.042934 2140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:07.043239 kubelet[2140]: E0421 10:21:07.043202 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:07.163778 kubelet[2140]: I0421 10:21:07.163546 2140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:07.165741 kubelet[2140]: E0421 10:21:07.165681 2140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:07.166185 kubelet[2140]: E0421 10:21:07.166070 2140 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:08.745307 systemd[1]: Reloading requested from client PID 2424 ('systemctl') (unit session-7.scope)... Apr 21 10:21:08.745328 systemd[1]: Reloading... Apr 21 10:21:08.808438 zram_generator::config[2463]: No configuration found. Apr 21 10:21:08.889996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:21:08.951328 systemd[1]: Reloading finished in 205 ms. Apr 21 10:21:08.978364 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 10:21:09.002201 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:21:09.002424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:09.002484 systemd[1]: kubelet.service: Consumed 1.761s CPU time, 129.8M memory peak, 0B memory swap peak. Apr 21 10:21:09.012132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:21:09.171498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:09.175537 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:21:09.225355 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:21:09.238191 kubelet[2508]: I0421 10:21:09.237950 2508 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:21:09.238191 kubelet[2508]: I0421 10:21:09.238047 2508 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:21:09.238191 kubelet[2508]: I0421 10:21:09.238098 2508 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:21:09.238191 kubelet[2508]: I0421 10:21:09.238103 2508 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:21:09.238884 kubelet[2508]: I0421 10:21:09.238536 2508 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:21:09.240000 kubelet[2508]: I0421 10:21:09.239981 2508 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:21:09.241885 kubelet[2508]: I0421 10:21:09.241849 2508 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:21:09.250158 kubelet[2508]: E0421 10:21:09.249189 2508 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:21:09.250158 kubelet[2508]: I0421 10:21:09.249396 2508 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:21:09.260691 kubelet[2508]: I0421 10:21:09.260228 2508 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:21:09.260691 kubelet[2508]: I0421 10:21:09.260516 2508 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:21:09.261014 kubelet[2508]: I0421 10:21:09.260567 2508 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:21:09.261014 kubelet[2508]: I0421 10:21:09.260897 2508 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 10:21:09.261014 
kubelet[2508]: I0421 10:21:09.260948 2508 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:21:09.261014 kubelet[2508]: I0421 10:21:09.260977 2508 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:21:09.261428 kubelet[2508]: I0421 10:21:09.261406 2508 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:21:09.261697 kubelet[2508]: I0421 10:21:09.261674 2508 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:21:09.261751 kubelet[2508]: I0421 10:21:09.261713 2508 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:21:09.261751 kubelet[2508]: I0421 10:21:09.261735 2508 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:21:09.261751 kubelet[2508]: I0421 10:21:09.261746 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:21:09.269176 kubelet[2508]: I0421 10:21:09.269014 2508 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:21:09.269796 kubelet[2508]: I0421 10:21:09.269766 2508 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:21:09.269841 kubelet[2508]: I0421 10:21:09.269816 2508 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:21:09.277650 kubelet[2508]: I0421 10:21:09.277475 2508 server.go:1257] "Started kubelet" Apr 21 10:21:09.278162 kubelet[2508]: I0421 10:21:09.277765 2508 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:21:09.278637 kubelet[2508]: I0421 10:21:09.278555 2508 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:21:09.278668 kubelet[2508]: I0421 10:21:09.278639 2508 
server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:21:09.278897 kubelet[2508]: I0421 10:21:09.278881 2508 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:21:09.279029 kubelet[2508]: I0421 10:21:09.279013 2508 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:21:09.280217 kubelet[2508]: I0421 10:21:09.279672 2508 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:21:09.280217 kubelet[2508]: I0421 10:21:09.279971 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:21:09.281726 kubelet[2508]: I0421 10:21:09.281161 2508 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:21:09.281726 kubelet[2508]: I0421 10:21:09.281228 2508 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:21:09.281726 kubelet[2508]: I0421 10:21:09.281392 2508 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:21:09.286498 kubelet[2508]: I0421 10:21:09.286361 2508 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:21:09.287212 kubelet[2508]: I0421 10:21:09.286695 2508 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:21:09.288580 kubelet[2508]: I0421 10:21:09.288560 2508 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:21:09.299003 kubelet[2508]: I0421 10:21:09.298768 2508 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:21:09.313203 kubelet[2508]: I0421 10:21:09.312831 2508 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 10:21:09.313203 kubelet[2508]: I0421 10:21:09.312882 2508 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:21:09.313203 kubelet[2508]: I0421 10:21:09.313041 2508 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:21:09.316347 kubelet[2508]: E0421 10:21:09.313199 2508 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:21:09.333830 kubelet[2508]: I0421 10:21:09.333686 2508 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334383 2508 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334404 2508 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334495 2508 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334511 2508 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334526 2508 policy_none.go:50] "Start" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334534 2508 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334541 2508 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334747 2508 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:21:09.334961 kubelet[2508]: I0421 10:21:09.334755 2508 policy_none.go:44] "Start" Apr 21 10:21:09.338661 kubelet[2508]: E0421 10:21:09.338626 2508 manager.go:525] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:21:09.338967 kubelet[2508]: I0421 10:21:09.338753 2508 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:21:09.338967 kubelet[2508]: I0421 10:21:09.338765 2508 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:21:09.338967 kubelet[2508]: I0421 10:21:09.338929 2508 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:21:09.340329 kubelet[2508]: E0421 10:21:09.340307 2508 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:21:09.445159 kubelet[2508]: I0421 10:21:09.444419 2508 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.445159 kubelet[2508]: I0421 10:21:09.444404 2508 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:09.445159 kubelet[2508]: I0421 10:21:09.444501 2508 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:09.452066 kubelet[2508]: I0421 10:21:09.450499 2508 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 10:21:09.471117 kubelet[2508]: I0421 10:21:09.470870 2508 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 21 10:21:09.471117 kubelet[2508]: I0421 10:21:09.471116 2508 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 21 10:21:09.484940 kubelet[2508]: I0421 10:21:09.483811 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.484940 kubelet[2508]: I0421 10:21:09.484155 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:09.484940 kubelet[2508]: I0421 10:21:09.484172 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:09.484940 kubelet[2508]: I0421 10:21:09.484187 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:09.484940 kubelet[2508]: I0421 10:21:09.484202 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.485960 kubelet[2508]: I0421 10:21:09.484214 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.485960 kubelet[2508]: I0421 10:21:09.484226 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d65de6e7a6cbbcddf18a4a3b78dfd01a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d65de6e7a6cbbcddf18a4a3b78dfd01a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:09.485960 kubelet[2508]: I0421 10:21:09.484520 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.485960 kubelet[2508]: I0421 10:21:09.484534 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:21:09.768978 kubelet[2508]: E0421 10:21:09.768796 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:09.768978 kubelet[2508]: E0421 10:21:09.768790 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:09.769623 kubelet[2508]: E0421 10:21:09.768812 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:21:10.265021 kubelet[2508]: I0421 10:21:10.264792 2508 apiserver.go:52] "Watching apiserver" Apr 21 10:21:10.282006 kubelet[2508]: I0421 10:21:10.281973 2508 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:21:10.332711 kubelet[2508]: I0421 10:21:10.332529 2508 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:10.333938 kubelet[2508]: I0421 10:21:10.333553 2508 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:10.334188 kubelet[2508]: E0421 10:21:10.334120 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:10.497822 kubelet[2508]: E0421 10:21:10.497383 2508 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:21:10.499423 kubelet[2508]: E0421 10:21:10.498664 2508 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 10:21:10.499423 kubelet[2508]: E0421 10:21:10.499197 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:10.499423 kubelet[2508]: E0421 10:21:10.499348 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:11.334740 kubelet[2508]: E0421 10:21:11.334474 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:11.335647 kubelet[2508]: E0421 
10:21:11.335157 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:11.379876 kubelet[2508]: I0421 10:21:11.379507 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.379473337 podStartE2EDuration="2.379473337s" podCreationTimestamp="2026-04-21 10:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:11.379308489 +0000 UTC m=+2.199656959" watchObservedRunningTime="2026-04-21 10:21:11.379473337 +0000 UTC m=+2.199821797" Apr 21 10:21:11.656663 kubelet[2508]: I0421 10:21:11.656466 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.656453658 podStartE2EDuration="2.656453658s" podCreationTimestamp="2026-04-21 10:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:11.649647948 +0000 UTC m=+2.469996419" watchObservedRunningTime="2026-04-21 10:21:11.656453658 +0000 UTC m=+2.476802118" Apr 21 10:21:11.656663 kubelet[2508]: I0421 10:21:11.656571 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.656567161 podStartE2EDuration="2.656567161s" podCreationTimestamp="2026-04-21 10:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:11.656406205 +0000 UTC m=+2.476754675" watchObservedRunningTime="2026-04-21 10:21:11.656567161 +0000 UTC m=+2.476915631" Apr 21 10:21:12.338878 kubelet[2508]: E0421 10:21:12.338678 2508 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:13.340027 kubelet[2508]: E0421 10:21:13.339788 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:14.782557 kubelet[2508]: E0421 10:21:14.782287 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:14.812516 kubelet[2508]: E0421 10:21:14.812306 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:15.486689 kubelet[2508]: I0421 10:21:15.486412 2508 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:21:15.487403 containerd[1457]: time="2026-04-21T10:21:15.487159553Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:21:15.488066 kubelet[2508]: I0421 10:21:15.487472 2508 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:21:16.304447 systemd[1]: Created slice kubepods-besteffort-pod0a7f8e54_1b38_4617_b26c_f50835a4a606.slice - libcontainer container kubepods-besteffort-pod0a7f8e54_1b38_4617_b26c_f50835a4a606.slice. 
Apr 21 10:21:16.457847 kubelet[2508]: I0421 10:21:16.457521 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a7f8e54-1b38-4617-b26c-f50835a4a606-kube-proxy\") pod \"kube-proxy-zn2qp\" (UID: \"0a7f8e54-1b38-4617-b26c-f50835a4a606\") " pod="kube-system/kube-proxy-zn2qp"
Apr 21 10:21:16.457847 kubelet[2508]: I0421 10:21:16.457763 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a7f8e54-1b38-4617-b26c-f50835a4a606-xtables-lock\") pod \"kube-proxy-zn2qp\" (UID: \"0a7f8e54-1b38-4617-b26c-f50835a4a606\") " pod="kube-system/kube-proxy-zn2qp"
Apr 21 10:21:16.457847 kubelet[2508]: I0421 10:21:16.457782 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n82pl\" (UniqueName: \"kubernetes.io/projected/0a7f8e54-1b38-4617-b26c-f50835a4a606-kube-api-access-n82pl\") pod \"kube-proxy-zn2qp\" (UID: \"0a7f8e54-1b38-4617-b26c-f50835a4a606\") " pod="kube-system/kube-proxy-zn2qp"
Apr 21 10:21:16.457847 kubelet[2508]: I0421 10:21:16.457802 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a7f8e54-1b38-4617-b26c-f50835a4a606-lib-modules\") pod \"kube-proxy-zn2qp\" (UID: \"0a7f8e54-1b38-4617-b26c-f50835a4a606\") " pod="kube-system/kube-proxy-zn2qp"
Apr 21 10:21:16.628652 kubelet[2508]: E0421 10:21:16.628178 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:16.630434 containerd[1457]: time="2026-04-21T10:21:16.630393679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zn2qp,Uid:0a7f8e54-1b38-4617-b26c-f50835a4a606,Namespace:kube-system,Attempt:0,}"
Apr 21 10:21:16.692869 containerd[1457]: time="2026-04-21T10:21:16.691980327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:21:16.696991 containerd[1457]: time="2026-04-21T10:21:16.694340324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:21:16.696991 containerd[1457]: time="2026-04-21T10:21:16.694363297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:16.696991 containerd[1457]: time="2026-04-21T10:21:16.694580391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:16.748823 systemd[1]: Started cri-containerd-94cc72f734f052e8959a1fb20802b8f369c0a96d371a8129f514f64112e3a8de.scope - libcontainer container 94cc72f734f052e8959a1fb20802b8f369c0a96d371a8129f514f64112e3a8de.
Apr 21 10:21:16.756140 systemd[1]: Created slice kubepods-besteffort-pod0a70911d_0606_417f_b92d_4352f3264c4b.slice - libcontainer container kubepods-besteffort-pod0a70911d_0606_417f_b92d_4352f3264c4b.slice.
Apr 21 10:21:16.813216 containerd[1457]: time="2026-04-21T10:21:16.813037383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zn2qp,Uid:0a7f8e54-1b38-4617-b26c-f50835a4a606,Namespace:kube-system,Attempt:0,} returns sandbox id \"94cc72f734f052e8959a1fb20802b8f369c0a96d371a8129f514f64112e3a8de\""
Apr 21 10:21:16.814404 kubelet[2508]: E0421 10:21:16.814362 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:16.824868 containerd[1457]: time="2026-04-21T10:21:16.824531757Z" level=info msg="CreateContainer within sandbox \"94cc72f734f052e8959a1fb20802b8f369c0a96d371a8129f514f64112e3a8de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:21:16.853443 containerd[1457]: time="2026-04-21T10:21:16.853319191Z" level=info msg="CreateContainer within sandbox \"94cc72f734f052e8959a1fb20802b8f369c0a96d371a8129f514f64112e3a8de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca4c1e7d3977505cedeed73414dca7aad3d5c8e13f5ea500de39c94f604e39bf\""
Apr 21 10:21:16.854336 containerd[1457]: time="2026-04-21T10:21:16.854313626Z" level=info msg="StartContainer for \"ca4c1e7d3977505cedeed73414dca7aad3d5c8e13f5ea500de39c94f604e39bf\""
Apr 21 10:21:16.867419 kubelet[2508]: I0421 10:21:16.866476 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlf8q\" (UniqueName: \"kubernetes.io/projected/0a70911d-0606-417f-b92d-4352f3264c4b-kube-api-access-xlf8q\") pod \"tigera-operator-6cf4cccc57-gvhb4\" (UID: \"0a70911d-0606-417f-b92d-4352f3264c4b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-gvhb4"
Apr 21 10:21:16.867419 kubelet[2508]: I0421 10:21:16.866815 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a70911d-0606-417f-b92d-4352f3264c4b-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-gvhb4\" (UID: \"0a70911d-0606-417f-b92d-4352f3264c4b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-gvhb4"
Apr 21 10:21:16.897165 systemd[1]: Started cri-containerd-ca4c1e7d3977505cedeed73414dca7aad3d5c8e13f5ea500de39c94f604e39bf.scope - libcontainer container ca4c1e7d3977505cedeed73414dca7aad3d5c8e13f5ea500de39c94f604e39bf.
Apr 21 10:21:16.961443 containerd[1457]: time="2026-04-21T10:21:16.961222432Z" level=info msg="StartContainer for \"ca4c1e7d3977505cedeed73414dca7aad3d5c8e13f5ea500de39c94f604e39bf\" returns successfully"
Apr 21 10:21:17.069586 containerd[1457]: time="2026-04-21T10:21:17.069060002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-gvhb4,Uid:0a70911d-0606-417f-b92d-4352f3264c4b,Namespace:tigera-operator,Attempt:0,}"
Apr 21 10:21:17.144890 containerd[1457]: time="2026-04-21T10:21:17.142383235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:21:17.144890 containerd[1457]: time="2026-04-21T10:21:17.144715096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:21:17.144890 containerd[1457]: time="2026-04-21T10:21:17.144726149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:17.144890 containerd[1457]: time="2026-04-21T10:21:17.144796245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:17.170218 systemd[1]: Started cri-containerd-fbf9475158a0d4b9b671795417a4591c0f6effab7d625a2d1d40e0ffa826d51a.scope - libcontainer container fbf9475158a0d4b9b671795417a4591c0f6effab7d625a2d1d40e0ffa826d51a.
Apr 21 10:21:17.214154 containerd[1457]: time="2026-04-21T10:21:17.214023062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-gvhb4,Uid:0a70911d-0606-417f-b92d-4352f3264c4b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fbf9475158a0d4b9b671795417a4591c0f6effab7d625a2d1d40e0ffa826d51a\""
Apr 21 10:21:17.216118 containerd[1457]: time="2026-04-21T10:21:17.216026346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 21 10:21:17.361404 kubelet[2508]: E0421 10:21:17.361084 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:17.386286 kubelet[2508]: I0421 10:21:17.386003 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-zn2qp" podStartSLOduration=1.385987534 podStartE2EDuration="1.385987534s" podCreationTimestamp="2026-04-21 10:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:17.385979746 +0000 UTC m=+8.206328216" watchObservedRunningTime="2026-04-21 10:21:17.385987534 +0000 UTC m=+8.206336003"
Apr 21 10:21:19.056289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330928505.mount: Deactivated successfully.
Apr 21 10:21:19.965750 containerd[1457]: time="2026-04-21T10:21:19.965536310Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:19.966635 containerd[1457]: time="2026-04-21T10:21:19.966563614Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 21 10:21:19.967486 containerd[1457]: time="2026-04-21T10:21:19.967428530Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:19.969803 containerd[1457]: time="2026-04-21T10:21:19.969774471Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:19.970483 containerd[1457]: time="2026-04-21T10:21:19.970454850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.754403286s"
Apr 21 10:21:19.970483 containerd[1457]: time="2026-04-21T10:21:19.970484572Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 21 10:21:19.976942 containerd[1457]: time="2026-04-21T10:21:19.976724360Z" level=info msg="CreateContainer within sandbox \"fbf9475158a0d4b9b671795417a4591c0f6effab7d625a2d1d40e0ffa826d51a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 21 10:21:19.996128 containerd[1457]: time="2026-04-21T10:21:19.995975538Z" level=info msg="CreateContainer within sandbox \"fbf9475158a0d4b9b671795417a4591c0f6effab7d625a2d1d40e0ffa826d51a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c5152c0c3b0b57f4b4ff13ff5d00677c68466e018bc38355df00f232782868ad\""
Apr 21 10:21:19.997461 containerd[1457]: time="2026-04-21T10:21:19.997433799Z" level=info msg="StartContainer for \"c5152c0c3b0b57f4b4ff13ff5d00677c68466e018bc38355df00f232782868ad\""
Apr 21 10:21:20.327101 systemd[1]: Started cri-containerd-c5152c0c3b0b57f4b4ff13ff5d00677c68466e018bc38355df00f232782868ad.scope - libcontainer container c5152c0c3b0b57f4b4ff13ff5d00677c68466e018bc38355df00f232782868ad.
Apr 21 10:21:20.403011 containerd[1457]: time="2026-04-21T10:21:20.402835287Z" level=info msg="StartContainer for \"c5152c0c3b0b57f4b4ff13ff5d00677c68466e018bc38355df00f232782868ad\" returns successfully"
Apr 21 10:21:20.613478 kubelet[2508]: I0421 10:21:20.612232 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-gvhb4" podStartSLOduration=1.854900585 podStartE2EDuration="4.612159931s" podCreationTimestamp="2026-04-21 10:21:16 +0000 UTC" firstStartedPulling="2026-04-21 10:21:17.215506838 +0000 UTC m=+8.035855297" lastFinishedPulling="2026-04-21 10:21:19.972766183 +0000 UTC m=+10.793114643" observedRunningTime="2026-04-21 10:21:20.612058911 +0000 UTC m=+11.432407375" watchObservedRunningTime="2026-04-21 10:21:20.612159931 +0000 UTC m=+11.432508402"
Apr 21 10:21:22.224715 kubelet[2508]: E0421 10:21:22.224411 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:23.123212 update_engine[1446]: I20260421 10:21:23.119073 1446 update_attempter.cc:509] Updating boot flags...
Apr 21 10:21:23.175411 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2884)
Apr 21 10:21:23.208030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2888)
Apr 21 10:21:24.788069 kubelet[2508]: E0421 10:21:24.787735 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:24.824444 kubelet[2508]: E0421 10:21:24.824141 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:25.867682 sudo[1642]: pam_unix(sudo:session): session closed for user root
Apr 21 10:21:25.869547 sshd[1638]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:25.873712 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:51244.service: Deactivated successfully.
Apr 21 10:21:25.875191 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:21:25.875334 systemd[1]: session-7.scope: Consumed 5.351s CPU time, 158.9M memory peak, 0B memory swap peak.
Apr 21 10:21:25.881071 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:21:25.883230 systemd-logind[1444]: Removed session 7.
Apr 21 10:21:27.350028 systemd[1]: Created slice kubepods-besteffort-pod0215b9dd_49aa_4cbb_9cae_e9859e647650.slice - libcontainer container kubepods-besteffort-pod0215b9dd_49aa_4cbb_9cae_e9859e647650.slice.
Apr 21 10:21:27.435982 systemd[1]: Created slice kubepods-besteffort-pod7cd98974_bb9c_46c5_ad99_a19f0f84f39c.slice - libcontainer container kubepods-besteffort-pod7cd98974_bb9c_46c5_ad99_a19f0f84f39c.slice.
Apr 21 10:21:27.467225 kubelet[2508]: I0421 10:21:27.467091 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0215b9dd-49aa-4cbb-9cae-e9859e647650-typha-certs\") pod \"calico-typha-85658c5784-hn2js\" (UID: \"0215b9dd-49aa-4cbb-9cae-e9859e647650\") " pod="calico-system/calico-typha-85658c5784-hn2js"
Apr 21 10:21:27.467225 kubelet[2508]: I0421 10:21:27.467140 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dprw\" (UniqueName: \"kubernetes.io/projected/0215b9dd-49aa-4cbb-9cae-e9859e647650-kube-api-access-8dprw\") pod \"calico-typha-85658c5784-hn2js\" (UID: \"0215b9dd-49aa-4cbb-9cae-e9859e647650\") " pod="calico-system/calico-typha-85658c5784-hn2js"
Apr 21 10:21:27.467225 kubelet[2508]: I0421 10:21:27.467162 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0215b9dd-49aa-4cbb-9cae-e9859e647650-tigera-ca-bundle\") pod \"calico-typha-85658c5784-hn2js\" (UID: \"0215b9dd-49aa-4cbb-9cae-e9859e647650\") " pod="calico-system/calico-typha-85658c5784-hn2js"
Apr 21 10:21:27.537065 kubelet[2508]: E0421 10:21:27.536986 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb"
Apr 21 10:21:27.572268 kubelet[2508]: I0421 10:21:27.569111 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-policysync\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.572268 kubelet[2508]: I0421 10:21:27.569196 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-var-run-calico\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.572268 kubelet[2508]: I0421 10:21:27.569247 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-lib-modules\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.572268 kubelet[2508]: I0421 10:21:27.569265 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-nodeproc\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.572268 kubelet[2508]: I0421 10:21:27.569286 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-cni-log-dir\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573070 kubelet[2508]: I0421 10:21:27.569304 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-sys-fs\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573070 kubelet[2508]: I0421 10:21:27.569434 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-node-certs\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573070 kubelet[2508]: I0421 10:21:27.569446 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxhg\" (UniqueName: \"kubernetes.io/projected/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-kube-api-access-vtxhg\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573070 kubelet[2508]: I0421 10:21:27.569459 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-cni-net-dir\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573070 kubelet[2508]: I0421 10:21:27.569471 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-bpffs\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573163 kubelet[2508]: I0421 10:21:27.569481 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-var-lib-calico\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573163 kubelet[2508]: I0421 10:21:27.569492 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-tigera-ca-bundle\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573163 kubelet[2508]: I0421 10:21:27.569572 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-xtables-lock\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573163 kubelet[2508]: I0421 10:21:27.569584 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-cni-bin-dir\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.573163 kubelet[2508]: I0421 10:21:27.569633 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7cd98974-bb9c-46c5-ad99-a19f0f84f39c-flexvol-driver-host\") pod \"calico-node-gp7w4\" (UID: \"7cd98974-bb9c-46c5-ad99-a19f0f84f39c\") " pod="calico-system/calico-node-gp7w4"
Apr 21 10:21:27.671090 kubelet[2508]: I0421 10:21:27.670985 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f7d6a7b-34e6-4667-9ed5-9310508d9afb-varrun\") pod \"csi-node-driver-5xcfq\" (UID: \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\") " pod="calico-system/csi-node-driver-5xcfq"
Apr 21 10:21:27.671545 kubelet[2508]: I0421 10:21:27.671257 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f7d6a7b-34e6-4667-9ed5-9310508d9afb-socket-dir\") pod \"csi-node-driver-5xcfq\" (UID: \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\") " pod="calico-system/csi-node-driver-5xcfq"
Apr 21 10:21:27.672998 kubelet[2508]: I0421 10:21:27.671899 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f7d6a7b-34e6-4667-9ed5-9310508d9afb-kubelet-dir\") pod \"csi-node-driver-5xcfq\" (UID: \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\") " pod="calico-system/csi-node-driver-5xcfq"
Apr 21 10:21:27.672998 kubelet[2508]: I0421 10:21:27.672673 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bt4\" (UniqueName: \"kubernetes.io/projected/8f7d6a7b-34e6-4667-9ed5-9310508d9afb-kube-api-access-48bt4\") pod \"csi-node-driver-5xcfq\" (UID: \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\") " pod="calico-system/csi-node-driver-5xcfq"
Apr 21 10:21:27.672998 kubelet[2508]: I0421 10:21:27.672737 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f7d6a7b-34e6-4667-9ed5-9310508d9afb-registration-dir\") pod \"csi-node-driver-5xcfq\" (UID: \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\") " pod="calico-system/csi-node-driver-5xcfq"
Apr 21 10:21:27.694700 kubelet[2508]: E0421 10:21:27.694646 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:21:27.695447 containerd[1457]: time="2026-04-21T10:21:27.695413039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85658c5784-hn2js,Uid:0215b9dd-49aa-4cbb-9cae-e9859e647650,Namespace:calico-system,Attempt:0,}"
Apr 21 10:21:27.729895 containerd[1457]: time="2026-04-21T10:21:27.729017492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:21:27.729895 containerd[1457]: time="2026-04-21T10:21:27.729154365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:21:27.729895 containerd[1457]: time="2026-04-21T10:21:27.729164786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:27.729895 containerd[1457]: time="2026-04-21T10:21:27.729887578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:27.743726 containerd[1457]: time="2026-04-21T10:21:27.743688456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gp7w4,Uid:7cd98974-bb9c-46c5-ad99-a19f0f84f39c,Namespace:calico-system,Attempt:0,}"
Apr 21 10:21:27.752088 systemd[1]: Started cri-containerd-c913762f62054e9926abf19ad1350a2b5570518ddfce356bb585a250c0684561.scope - libcontainer container c913762f62054e9926abf19ad1350a2b5570518ddfce356bb585a250c0684561.
Apr 21 10:21:27.776471 kubelet[2508]: E0421 10:21:27.775344 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.776471 kubelet[2508]: W0421 10:21:27.775420 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.776471 kubelet[2508]: E0421 10:21:27.775790 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.778594 kubelet[2508]: E0421 10:21:27.778083 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.778594 kubelet[2508]: W0421 10:21:27.778094 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.778594 kubelet[2508]: E0421 10:21:27.778221 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.779639 kubelet[2508]: E0421 10:21:27.779548 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.779639 kubelet[2508]: W0421 10:21:27.779559 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.779639 kubelet[2508]: E0421 10:21:27.779569 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.780399 kubelet[2508]: E0421 10:21:27.780329 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.780536 kubelet[2508]: W0421 10:21:27.780488 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.780536 kubelet[2508]: E0421 10:21:27.780526 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.781034 kubelet[2508]: E0421 10:21:27.781005 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.781034 kubelet[2508]: W0421 10:21:27.781015 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.781034 kubelet[2508]: E0421 10:21:27.781025 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.781740 kubelet[2508]: E0421 10:21:27.781620 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.781740 kubelet[2508]: W0421 10:21:27.781631 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.781740 kubelet[2508]: E0421 10:21:27.781641 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.782154 kubelet[2508]: E0421 10:21:27.782034 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.782154 kubelet[2508]: W0421 10:21:27.782044 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.782154 kubelet[2508]: E0421 10:21:27.782053 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.784141 kubelet[2508]: E0421 10:21:27.783981 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.784141 kubelet[2508]: W0421 10:21:27.783992 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.784141 kubelet[2508]: E0421 10:21:27.784002 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.784391 kubelet[2508]: E0421 10:21:27.784269 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.784391 kubelet[2508]: W0421 10:21:27.784276 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.784391 kubelet[2508]: E0421 10:21:27.784284 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.784496 kubelet[2508]: E0421 10:21:27.784478 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.784673 kubelet[2508]: W0421 10:21:27.784571 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.784673 kubelet[2508]: E0421 10:21:27.784587 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.785178 containerd[1457]: time="2026-04-21T10:21:27.785003126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:21:27.785178 containerd[1457]: time="2026-04-21T10:21:27.785159387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:21:27.785694 containerd[1457]: time="2026-04-21T10:21:27.785176890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:27.785694 containerd[1457]: time="2026-04-21T10:21:27.785307605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:27.786522 kubelet[2508]: E0421 10:21:27.786418 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.786522 kubelet[2508]: W0421 10:21:27.786428 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.786522 kubelet[2508]: E0421 10:21:27.786438 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.786668 kubelet[2508]: E0421 10:21:27.786661 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.786779 kubelet[2508]: W0421 10:21:27.786694 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.786779 kubelet[2508]: E0421 10:21:27.786704 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.787223 kubelet[2508]: E0421 10:21:27.787213 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.787313 kubelet[2508]: W0421 10:21:27.787305 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.787347 kubelet[2508]: E0421 10:21:27.787341 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.787691 kubelet[2508]: E0421 10:21:27.787683 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.787737 kubelet[2508]: W0421 10:21:27.787731 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.787765 kubelet[2508]: E0421 10:21:27.787760 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.789691 kubelet[2508]: E0421 10:21:27.789679 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.789752 kubelet[2508]: W0421 10:21:27.789745 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.789812 kubelet[2508]: E0421 10:21:27.789804 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:21:27.790109 kubelet[2508]: E0421 10:21:27.790102 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:21:27.790154 kubelet[2508]: W0421 10:21:27.790148 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:21:27.790186 kubelet[2508]: E0421 10:21:27.790181 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 21 10:21:27.791318 kubelet[2508]: E0421 10:21:27.791307 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.791384 kubelet[2508]: W0421 10:21:27.791377 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.791415 kubelet[2508]: E0421 10:21:27.791410 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.791654 kubelet[2508]: E0421 10:21:27.791645 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.791891 kubelet[2508]: W0421 10:21:27.791857 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.791989 kubelet[2508]: E0421 10:21:27.791978 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:27.792283 kubelet[2508]: E0421 10:21:27.792274 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.792515 kubelet[2508]: W0421 10:21:27.792373 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.792515 kubelet[2508]: E0421 10:21:27.792385 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.792900 kubelet[2508]: E0421 10:21:27.792780 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.792900 kubelet[2508]: W0421 10:21:27.792797 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.792900 kubelet[2508]: E0421 10:21:27.792806 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:27.793148 kubelet[2508]: E0421 10:21:27.793141 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.793211 kubelet[2508]: W0421 10:21:27.793181 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.793211 kubelet[2508]: E0421 10:21:27.793190 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.793566 kubelet[2508]: E0421 10:21:27.793543 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.793566 kubelet[2508]: W0421 10:21:27.793551 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.793566 kubelet[2508]: E0421 10:21:27.793558 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:27.793953 kubelet[2508]: E0421 10:21:27.793877 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.793953 kubelet[2508]: W0421 10:21:27.793885 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.793953 kubelet[2508]: E0421 10:21:27.793892 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.794212 kubelet[2508]: E0421 10:21:27.794181 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.794212 kubelet[2508]: W0421 10:21:27.794188 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.794212 kubelet[2508]: E0421 10:21:27.794194 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:27.794513 kubelet[2508]: E0421 10:21:27.794449 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.794513 kubelet[2508]: W0421 10:21:27.794458 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.794513 kubelet[2508]: E0421 10:21:27.794502 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.801470 kubelet[2508]: E0421 10:21:27.801435 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:27.801536 kubelet[2508]: W0421 10:21:27.801509 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:27.801536 kubelet[2508]: E0421 10:21:27.801525 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:27.813061 systemd[1]: Started cri-containerd-51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6.scope - libcontainer container 51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6. 
Apr 21 10:21:27.814706 containerd[1457]: time="2026-04-21T10:21:27.814547829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85658c5784-hn2js,Uid:0215b9dd-49aa-4cbb-9cae-e9859e647650,Namespace:calico-system,Attempt:0,} returns sandbox id \"c913762f62054e9926abf19ad1350a2b5570518ddfce356bb585a250c0684561\"" Apr 21 10:21:27.815539 kubelet[2508]: E0421 10:21:27.815495 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:27.817593 containerd[1457]: time="2026-04-21T10:21:27.817556097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:21:27.845362 containerd[1457]: time="2026-04-21T10:21:27.845158801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gp7w4,Uid:7cd98974-bb9c-46c5-ad99-a19f0f84f39c,Namespace:calico-system,Attempt:0,} returns sandbox id \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\"" Apr 21 10:21:29.314199 kubelet[2508]: E0421 10:21:29.314035 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:29.406796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526471500.mount: Deactivated successfully. 
Apr 21 10:21:30.747660 containerd[1457]: time="2026-04-21T10:21:30.747413332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:30.748859 containerd[1457]: time="2026-04-21T10:21:30.748328608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:21:30.749218 containerd[1457]: time="2026-04-21T10:21:30.749181692Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:30.755439 containerd[1457]: time="2026-04-21T10:21:30.755229705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:30.756948 containerd[1457]: time="2026-04-21T10:21:30.755817711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.93821999s" Apr 21 10:21:30.756948 containerd[1457]: time="2026-04-21T10:21:30.755852514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:21:30.758504 containerd[1457]: time="2026-04-21T10:21:30.758477685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:21:30.769703 containerd[1457]: time="2026-04-21T10:21:30.769668661Z" level=info msg="CreateContainer within sandbox \"c913762f62054e9926abf19ad1350a2b5570518ddfce356bb585a250c0684561\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:21:30.782828 containerd[1457]: time="2026-04-21T10:21:30.782782629Z" level=info msg="CreateContainer within sandbox \"c913762f62054e9926abf19ad1350a2b5570518ddfce356bb585a250c0684561\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2cbc68781fc327b9823b235851db40293199f5c57beb1fc433182820b6c98fc\"" Apr 21 10:21:30.783281 containerd[1457]: time="2026-04-21T10:21:30.783262396Z" level=info msg="StartContainer for \"a2cbc68781fc327b9823b235851db40293199f5c57beb1fc433182820b6c98fc\"" Apr 21 10:21:30.822107 systemd[1]: Started cri-containerd-a2cbc68781fc327b9823b235851db40293199f5c57beb1fc433182820b6c98fc.scope - libcontainer container a2cbc68781fc327b9823b235851db40293199f5c57beb1fc433182820b6c98fc. Apr 21 10:21:30.869404 containerd[1457]: time="2026-04-21T10:21:30.869287115Z" level=info msg="StartContainer for \"a2cbc68781fc327b9823b235851db40293199f5c57beb1fc433182820b6c98fc\" returns successfully" Apr 21 10:21:31.316882 kubelet[2508]: E0421 10:21:31.316753 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:31.639534 kubelet[2508]: E0421 10:21:31.639193 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:31.659728 kubelet[2508]: I0421 10:21:31.659560 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-85658c5784-hn2js" podStartSLOduration=1.718269651 podStartE2EDuration="4.65954742s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:27.817015173 +0000 UTC 
m=+18.637363643" lastFinishedPulling="2026-04-21 10:21:30.758292952 +0000 UTC m=+21.578641412" observedRunningTime="2026-04-21 10:21:31.658769026 +0000 UTC m=+22.479117493" watchObservedRunningTime="2026-04-21 10:21:31.65954742 +0000 UTC m=+22.479895943" Apr 21 10:21:31.725380 kubelet[2508]: E0421 10:21:31.725145 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.725380 kubelet[2508]: W0421 10:21:31.725172 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.725380 kubelet[2508]: E0421 10:21:31.725487 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.725380 kubelet[2508]: E0421 10:21:31.725691 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727387 kubelet[2508]: W0421 10:21:31.725698 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.725709 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.725897 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727387 kubelet[2508]: W0421 10:21:31.725926 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.725934 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.726255 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727387 kubelet[2508]: W0421 10:21:31.726262 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.726269 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.727387 kubelet[2508]: E0421 10:21:31.726412 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727387 kubelet[2508]: W0421 10:21:31.726416 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726421 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726546 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727865 kubelet[2508]: W0421 10:21:31.726550 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726555 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726695 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727865 kubelet[2508]: W0421 10:21:31.726699 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726706 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726827 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.727865 kubelet[2508]: W0421 10:21:31.726831 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.727865 kubelet[2508]: E0421 10:21:31.726836 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727014 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728048 kubelet[2508]: W0421 10:21:31.727019 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727025 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727141 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728048 kubelet[2508]: W0421 10:21:31.727146 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727151 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727268 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728048 kubelet[2508]: W0421 10:21:31.727272 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727276 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.728048 kubelet[2508]: E0421 10:21:31.727475 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728300 kubelet[2508]: W0421 10:21:31.727480 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.727486 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.727722 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728300 kubelet[2508]: W0421 10:21:31.727734 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.727749 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.728015 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728300 kubelet[2508]: W0421 10:21:31.728021 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.728027 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.728300 kubelet[2508]: E0421 10:21:31.728177 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728300 kubelet[2508]: W0421 10:21:31.728181 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728483 kubelet[2508]: E0421 10:21:31.728186 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.728601 kubelet[2508]: E0421 10:21:31.728565 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728601 kubelet[2508]: W0421 10:21:31.728580 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728601 kubelet[2508]: E0421 10:21:31.728590 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.728766 kubelet[2508]: E0421 10:21:31.728736 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.728766 kubelet[2508]: W0421 10:21:31.728749 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.728766 kubelet[2508]: E0421 10:21:31.728755 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.729113 kubelet[2508]: E0421 10:21:31.728983 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.729113 kubelet[2508]: W0421 10:21:31.728997 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.729113 kubelet[2508]: E0421 10:21:31.729012 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.729454 kubelet[2508]: E0421 10:21:31.729419 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.729454 kubelet[2508]: W0421 10:21:31.729435 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.729454 kubelet[2508]: E0421 10:21:31.729444 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.729663 kubelet[2508]: E0421 10:21:31.729645 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.729663 kubelet[2508]: W0421 10:21:31.729656 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.729663 kubelet[2508]: E0421 10:21:31.729663 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.729853 kubelet[2508]: E0421 10:21:31.729840 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.729853 kubelet[2508]: W0421 10:21:31.729852 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.729929 kubelet[2508]: E0421 10:21:31.729859 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.730039 kubelet[2508]: E0421 10:21:31.730024 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.730059 kubelet[2508]: W0421 10:21:31.730040 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.730059 kubelet[2508]: E0421 10:21:31.730049 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.730273 kubelet[2508]: E0421 10:21:31.730246 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.730273 kubelet[2508]: W0421 10:21:31.730260 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.730273 kubelet[2508]: E0421 10:21:31.730266 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.730547 kubelet[2508]: E0421 10:21:31.730525 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.730547 kubelet[2508]: W0421 10:21:31.730538 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.730547 kubelet[2508]: E0421 10:21:31.730544 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.730929 kubelet[2508]: E0421 10:21:31.730877 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.730929 kubelet[2508]: W0421 10:21:31.730917 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.730967 kubelet[2508]: E0421 10:21:31.730930 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.731143 kubelet[2508]: E0421 10:21:31.731119 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.731143 kubelet[2508]: W0421 10:21:31.731132 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.731143 kubelet[2508]: E0421 10:21:31.731139 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.731372 kubelet[2508]: E0421 10:21:31.731358 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.731394 kubelet[2508]: W0421 10:21:31.731372 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.731394 kubelet[2508]: E0421 10:21:31.731383 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.731673 kubelet[2508]: E0421 10:21:31.731659 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.731695 kubelet[2508]: W0421 10:21:31.731674 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.731695 kubelet[2508]: E0421 10:21:31.731684 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.731853 kubelet[2508]: E0421 10:21:31.731843 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.731853 kubelet[2508]: W0421 10:21:31.731853 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.731886 kubelet[2508]: E0421 10:21:31.731858 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.732050 kubelet[2508]: E0421 10:21:31.732038 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.732050 kubelet[2508]: W0421 10:21:31.732049 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.732092 kubelet[2508]: E0421 10:21:31.732055 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.732278 kubelet[2508]: E0421 10:21:31.732265 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.732299 kubelet[2508]: W0421 10:21:31.732279 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.732299 kubelet[2508]: E0421 10:21:31.732288 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:31.732508 kubelet[2508]: E0421 10:21:31.732497 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.732508 kubelet[2508]: W0421 10:21:31.732507 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.732542 kubelet[2508]: E0421 10:21:31.732514 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:21:31.732695 kubelet[2508]: E0421 10:21:31.732684 2508 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:21:31.732695 kubelet[2508]: W0421 10:21:31.732694 2508 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:21:31.732729 kubelet[2508]: E0421 10:21:31.732701 2508 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:21:32.240807 containerd[1457]: time="2026-04-21T10:21:32.240586815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:32.242882 containerd[1457]: time="2026-04-21T10:21:32.241265275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:21:32.242997 containerd[1457]: time="2026-04-21T10:21:32.242971521Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:32.244822 containerd[1457]: time="2026-04-21T10:21:32.244774059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:32.245540 containerd[1457]: time="2026-04-21T10:21:32.245514816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.487008431s" Apr 21 10:21:32.245575 containerd[1457]: time="2026-04-21T10:21:32.245546716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:21:32.251384 containerd[1457]: time="2026-04-21T10:21:32.251332761Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:21:32.286118 containerd[1457]: time="2026-04-21T10:21:32.284870152Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791\"" Apr 21 10:21:32.290511 containerd[1457]: time="2026-04-21T10:21:32.290474798Z" level=info msg="StartContainer for \"dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791\"" Apr 21 10:21:32.348112 systemd[1]: Started cri-containerd-dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791.scope - libcontainer container dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791. Apr 21 10:21:32.404929 containerd[1457]: time="2026-04-21T10:21:32.404485798Z" level=info msg="StartContainer for \"dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791\" returns successfully" Apr 21 10:21:32.405136 systemd[1]: cri-containerd-dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791.scope: Deactivated successfully. 
Apr 21 10:21:32.447810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791-rootfs.mount: Deactivated successfully. Apr 21 10:21:32.456050 containerd[1457]: time="2026-04-21T10:21:32.455996321Z" level=info msg="shim disconnected" id=dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791 namespace=k8s.io Apr 21 10:21:32.456194 containerd[1457]: time="2026-04-21T10:21:32.456058955Z" level=warning msg="cleaning up after shim disconnected" id=dd5b44095593923bddd9eea4e6acb878fe40c436d401c6c29f836fdd9699e791 namespace=k8s.io Apr 21 10:21:32.456194 containerd[1457]: time="2026-04-21T10:21:32.456068962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:21:32.649707 kubelet[2508]: I0421 10:21:32.648714 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:21:32.649707 kubelet[2508]: E0421 10:21:32.649352 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:32.650826 containerd[1457]: time="2026-04-21T10:21:32.649246308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:21:33.316649 kubelet[2508]: E0421 10:21:33.316389 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:35.315312 kubelet[2508]: E0421 10:21:35.314951 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" 
podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:36.550410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140676600.mount: Deactivated successfully. Apr 21 10:21:36.771042 containerd[1457]: time="2026-04-21T10:21:36.770649223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:21:36.776781 containerd[1457]: time="2026-04-21T10:21:36.776718792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.12742948s" Apr 21 10:21:36.776781 containerd[1457]: time="2026-04-21T10:21:36.776775719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:21:36.783649 containerd[1457]: time="2026-04-21T10:21:36.783597010Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:21:36.785058 containerd[1457]: time="2026-04-21T10:21:36.784996607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:36.785652 containerd[1457]: time="2026-04-21T10:21:36.785621698Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:36.788642 containerd[1457]: time="2026-04-21T10:21:36.788600277Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:36.810257 containerd[1457]: time="2026-04-21T10:21:36.809575998Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029\"" Apr 21 10:21:36.812441 containerd[1457]: time="2026-04-21T10:21:36.812392547Z" level=info msg="StartContainer for \"63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029\"" Apr 21 10:21:36.949489 systemd[1]: Started cri-containerd-63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029.scope - libcontainer container 63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029. Apr 21 10:21:37.013440 containerd[1457]: time="2026-04-21T10:21:37.012862517Z" level=info msg="StartContainer for \"63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029\" returns successfully" Apr 21 10:21:37.106897 systemd[1]: cri-containerd-63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029.scope: Deactivated successfully. 
Apr 21 10:21:37.144914 containerd[1457]: time="2026-04-21T10:21:37.144410853Z" level=info msg="shim disconnected" id=63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029 namespace=k8s.io Apr 21 10:21:37.144914 containerd[1457]: time="2026-04-21T10:21:37.144693443Z" level=warning msg="cleaning up after shim disconnected" id=63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029 namespace=k8s.io Apr 21 10:21:37.144914 containerd[1457]: time="2026-04-21T10:21:37.144701589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:21:37.314654 kubelet[2508]: E0421 10:21:37.314300 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:37.552435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63bbe86a7524cbe53a9b5638305258b44476a38b26035b15af92d411df79d029-rootfs.mount: Deactivated successfully. 
Apr 21 10:21:37.671812 containerd[1457]: time="2026-04-21T10:21:37.671648950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:21:39.314383 kubelet[2508]: E0421 10:21:39.314174 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:40.185951 containerd[1457]: time="2026-04-21T10:21:40.185710431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:40.187432 containerd[1457]: time="2026-04-21T10:21:40.186652668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:21:40.188108 containerd[1457]: time="2026-04-21T10:21:40.188069712Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:40.191387 containerd[1457]: time="2026-04-21T10:21:40.191335154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:40.192224 containerd[1457]: time="2026-04-21T10:21:40.192199545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.520504873s" Apr 21 10:21:40.192258 containerd[1457]: time="2026-04-21T10:21:40.192227573Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:21:40.206481 containerd[1457]: time="2026-04-21T10:21:40.206374177Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:21:40.220106 containerd[1457]: time="2026-04-21T10:21:40.220057052Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee\"" Apr 21 10:21:40.221625 containerd[1457]: time="2026-04-21T10:21:40.220517361Z" level=info msg="StartContainer for \"2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee\"" Apr 21 10:21:40.262090 systemd[1]: Started cri-containerd-2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee.scope - libcontainer container 2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee. Apr 21 10:21:40.323368 containerd[1457]: time="2026-04-21T10:21:40.323288571Z" level=info msg="StartContainer for \"2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee\" returns successfully" Apr 21 10:21:40.942930 systemd[1]: cri-containerd-2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee.scope: Deactivated successfully. Apr 21 10:21:40.961609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee-rootfs.mount: Deactivated successfully. 
Apr 21 10:21:40.995245 containerd[1457]: time="2026-04-21T10:21:40.995000108Z" level=info msg="shim disconnected" id=2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee namespace=k8s.io Apr 21 10:21:40.995245 containerd[1457]: time="2026-04-21T10:21:40.995182641Z" level=warning msg="cleaning up after shim disconnected" id=2d3c1a4b5289aa3b09c36e6f7979af27c9de0a3a27ee963bbcb57cd50a263cee namespace=k8s.io Apr 21 10:21:40.995245 containerd[1457]: time="2026-04-21T10:21:40.995190444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:21:41.033497 kubelet[2508]: I0421 10:21:41.033468 2508 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 10:21:41.072487 systemd[1]: Created slice kubepods-burstable-pod220f4521_b6d9_4c5d_96ae_7597b58ee030.slice - libcontainer container kubepods-burstable-pod220f4521_b6d9_4c5d_96ae_7597b58ee030.slice. Apr 21 10:21:41.082189 systemd[1]: Created slice kubepods-besteffort-podfd00ac23_0fb5_4d7f_956d_e123593d4ebc.slice - libcontainer container kubepods-besteffort-podfd00ac23_0fb5_4d7f_956d_e123593d4ebc.slice. Apr 21 10:21:41.086283 systemd[1]: Created slice kubepods-burstable-pod6645c0ad_7034_4d39_a7d9_4a1d8fcc7de5.slice - libcontainer container kubepods-burstable-pod6645c0ad_7034_4d39_a7d9_4a1d8fcc7de5.slice. Apr 21 10:21:41.090114 systemd[1]: Created slice kubepods-besteffort-podc70b1305_257f_44b6_ab9b_a0c251378e0f.slice - libcontainer container kubepods-besteffort-podc70b1305_257f_44b6_ab9b_a0c251378e0f.slice. Apr 21 10:21:41.092982 systemd[1]: Created slice kubepods-besteffort-pod386cc7c2_feea_4942_a60b_423727e06d40.slice - libcontainer container kubepods-besteffort-pod386cc7c2_feea_4942_a60b_423727e06d40.slice. Apr 21 10:21:41.104152 systemd[1]: Created slice kubepods-besteffort-pod7fa99074_099e_4ed7_ae7d_e7227f1db188.slice - libcontainer container kubepods-besteffort-pod7fa99074_099e_4ed7_ae7d_e7227f1db188.slice. 
Apr 21 10:21:41.114423 systemd[1]: Created slice kubepods-besteffort-pode4b23b05_a52a_44ea_b704_ed8f7e3ac456.slice - libcontainer container kubepods-besteffort-pode4b23b05_a52a_44ea_b704_ed8f7e3ac456.slice. Apr 21 10:21:41.256156 kubelet[2508]: I0421 10:21:41.255329 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-ca-bundle\") pod \"whisker-7f78d88875-ckm7h\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:41.256156 kubelet[2508]: I0421 10:21:41.255400 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29r6x\" (UniqueName: \"kubernetes.io/projected/7fa99074-099e-4ed7-ae7d-e7227f1db188-kube-api-access-29r6x\") pod \"whisker-7f78d88875-ckm7h\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:41.256156 kubelet[2508]: I0421 10:21:41.255417 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp69f\" (UniqueName: \"kubernetes.io/projected/386cc7c2-feea-4942-a60b-423727e06d40-kube-api-access-jp69f\") pod \"calico-apiserver-7999b6f797-gkbt6\" (UID: \"386cc7c2-feea-4942-a60b-423727e06d40\") " pod="calico-system/calico-apiserver-7999b6f797-gkbt6" Apr 21 10:21:41.256156 kubelet[2508]: I0421 10:21:41.255436 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfq7q\" (UniqueName: \"kubernetes.io/projected/6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5-kube-api-access-zfq7q\") pod \"coredns-7d764666f9-4jtjk\" (UID: \"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5\") " pod="kube-system/coredns-7d764666f9-4jtjk" Apr 21 10:21:41.256156 kubelet[2508]: I0421 10:21:41.255708 2508 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvcj\" (UniqueName: \"kubernetes.io/projected/e4b23b05-a52a-44ea-b704-ed8f7e3ac456-kube-api-access-5nvcj\") pod \"calico-kube-controllers-657c5b854d-hstpz\" (UID: \"e4b23b05-a52a-44ea-b704-ed8f7e3ac456\") " pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" Apr 21 10:21:41.257389 kubelet[2508]: I0421 10:21:41.255785 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fd00ac23-0fb5-4d7f-956d-e123593d4ebc-goldmane-key-pair\") pod \"goldmane-9f7667bb8-zv4b9\" (UID: \"fd00ac23-0fb5-4d7f-956d-e123593d4ebc\") " pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 21 10:21:41.257389 kubelet[2508]: I0421 10:21:41.255802 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c70b1305-257f-44b6-ab9b-a0c251378e0f-calico-apiserver-certs\") pod \"calico-apiserver-7999b6f797-z5ch8\" (UID: \"c70b1305-257f-44b6-ab9b-a0c251378e0f\") " pod="calico-system/calico-apiserver-7999b6f797-z5ch8" Apr 21 10:21:41.257389 kubelet[2508]: I0421 10:21:41.255815 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-backend-key-pair\") pod \"whisker-7f78d88875-ckm7h\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:41.257389 kubelet[2508]: I0421 10:21:41.255835 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd00ac23-0fb5-4d7f-956d-e123593d4ebc-config\") pod \"goldmane-9f7667bb8-zv4b9\" (UID: \"fd00ac23-0fb5-4d7f-956d-e123593d4ebc\") " pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 
21 10:21:41.257389 kubelet[2508]: I0421 10:21:41.255849 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zqlb\" (UniqueName: \"kubernetes.io/projected/fd00ac23-0fb5-4d7f-956d-e123593d4ebc-kube-api-access-8zqlb\") pod \"goldmane-9f7667bb8-zv4b9\" (UID: \"fd00ac23-0fb5-4d7f-956d-e123593d4ebc\") " pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 21 10:21:41.257473 kubelet[2508]: I0421 10:21:41.255861 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4b23b05-a52a-44ea-b704-ed8f7e3ac456-tigera-ca-bundle\") pod \"calico-kube-controllers-657c5b854d-hstpz\" (UID: \"e4b23b05-a52a-44ea-b704-ed8f7e3ac456\") " pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" Apr 21 10:21:41.257473 kubelet[2508]: I0421 10:21:41.255879 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-nginx-config\") pod \"whisker-7f78d88875-ckm7h\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:41.257473 kubelet[2508]: I0421 10:21:41.256010 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd00ac23-0fb5-4d7f-956d-e123593d4ebc-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-zv4b9\" (UID: \"fd00ac23-0fb5-4d7f-956d-e123593d4ebc\") " pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 21 10:21:41.257473 kubelet[2508]: I0421 10:21:41.256032 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/386cc7c2-feea-4942-a60b-423727e06d40-calico-apiserver-certs\") pod \"calico-apiserver-7999b6f797-gkbt6\" (UID: 
\"386cc7c2-feea-4942-a60b-423727e06d40\") " pod="calico-system/calico-apiserver-7999b6f797-gkbt6" Apr 21 10:21:41.257473 kubelet[2508]: I0421 10:21:41.256064 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtzsj\" (UniqueName: \"kubernetes.io/projected/220f4521-b6d9-4c5d-96ae-7597b58ee030-kube-api-access-rtzsj\") pod \"coredns-7d764666f9-wgmqg\" (UID: \"220f4521-b6d9-4c5d-96ae-7597b58ee030\") " pod="kube-system/coredns-7d764666f9-wgmqg" Apr 21 10:21:41.257599 kubelet[2508]: I0421 10:21:41.256101 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh92t\" (UniqueName: \"kubernetes.io/projected/c70b1305-257f-44b6-ab9b-a0c251378e0f-kube-api-access-zh92t\") pod \"calico-apiserver-7999b6f797-z5ch8\" (UID: \"c70b1305-257f-44b6-ab9b-a0c251378e0f\") " pod="calico-system/calico-apiserver-7999b6f797-z5ch8" Apr 21 10:21:41.257599 kubelet[2508]: I0421 10:21:41.256203 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5-config-volume\") pod \"coredns-7d764666f9-4jtjk\" (UID: \"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5\") " pod="kube-system/coredns-7d764666f9-4jtjk" Apr 21 10:21:41.257599 kubelet[2508]: I0421 10:21:41.256266 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/220f4521-b6d9-4c5d-96ae-7597b58ee030-config-volume\") pod \"coredns-7d764666f9-wgmqg\" (UID: \"220f4521-b6d9-4c5d-96ae-7597b58ee030\") " pod="kube-system/coredns-7d764666f9-wgmqg" Apr 21 10:21:41.324831 systemd[1]: Created slice kubepods-besteffort-pod8f7d6a7b_34e6_4667_9ed5_9310508d9afb.slice - libcontainer container kubepods-besteffort-pod8f7d6a7b_34e6_4667_9ed5_9310508d9afb.slice. 
Apr 21 10:21:41.330892 containerd[1457]: time="2026-04-21T10:21:41.330830726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xcfq,Uid:8f7d6a7b-34e6-4667-9ed5-9310508d9afb,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.440983 containerd[1457]: time="2026-04-21T10:21:41.440679469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c5b854d-hstpz,Uid:e4b23b05-a52a-44ea-b704-ed8f7e3ac456,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.525979 containerd[1457]: time="2026-04-21T10:21:41.525512978Z" level=error msg="Failed to destroy network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.526422 containerd[1457]: time="2026-04-21T10:21:41.526062203Z" level=error msg="encountered an error cleaning up failed sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.526422 containerd[1457]: time="2026-04-21T10:21:41.526108545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xcfq,Uid:8f7d6a7b-34e6-4667-9ed5-9310508d9afb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.533829 kubelet[2508]: E0421 10:21:41.533769 2508 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.534034 kubelet[2508]: E0421 10:21:41.533954 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5xcfq" Apr 21 10:21:41.534034 kubelet[2508]: E0421 10:21:41.534030 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5xcfq" Apr 21 10:21:41.534265 kubelet[2508]: E0421 10:21:41.534146 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5xcfq_calico-system(8f7d6a7b-34e6-4667-9ed5-9310508d9afb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5xcfq_calico-system(8f7d6a7b-34e6-4667-9ed5-9310508d9afb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:41.539080 containerd[1457]: time="2026-04-21T10:21:41.539003712Z" level=error msg="Failed to destroy network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.539346 containerd[1457]: time="2026-04-21T10:21:41.539302689Z" level=error msg="encountered an error cleaning up failed sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.539400 containerd[1457]: time="2026-04-21T10:21:41.539378140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c5b854d-hstpz,Uid:e4b23b05-a52a-44ea-b704-ed8f7e3ac456,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.539682 kubelet[2508]: E0421 10:21:41.539631 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.539761 kubelet[2508]: E0421 
10:21:41.539685 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" Apr 21 10:21:41.539761 kubelet[2508]: E0421 10:21:41.539703 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" Apr 21 10:21:41.539824 kubelet[2508]: E0421 10:21:41.539754 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-657c5b854d-hstpz_calico-system(e4b23b05-a52a-44ea-b704-ed8f7e3ac456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-657c5b854d-hstpz_calico-system(e4b23b05-a52a-44ea-b704-ed8f7e3ac456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" podUID="e4b23b05-a52a-44ea-b704-ed8f7e3ac456" Apr 21 10:21:41.686073 kubelet[2508]: E0421 10:21:41.685708 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:41.688484 containerd[1457]: time="2026-04-21T10:21:41.688419783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wgmqg,Uid:220f4521-b6d9-4c5d-96ae-7597b58ee030,Namespace:kube-system,Attempt:0,}" Apr 21 10:21:41.690212 containerd[1457]: time="2026-04-21T10:21:41.690158019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zv4b9,Uid:fd00ac23-0fb5-4d7f-956d-e123593d4ebc,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.695965 kubelet[2508]: E0421 10:21:41.694767 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:41.697874 containerd[1457]: time="2026-04-21T10:21:41.697814853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4jtjk,Uid:6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5,Namespace:kube-system,Attempt:0,}" Apr 21 10:21:41.704322 containerd[1457]: time="2026-04-21T10:21:41.701174741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-z5ch8,Uid:c70b1305-257f-44b6-ab9b-a0c251378e0f,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.728611 containerd[1457]: time="2026-04-21T10:21:41.727267067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-gkbt6,Uid:386cc7c2-feea-4942-a60b-423727e06d40,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.732686 containerd[1457]: time="2026-04-21T10:21:41.732158639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f78d88875-ckm7h,Uid:7fa99074-099e-4ed7-ae7d-e7227f1db188,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:41.745334 kubelet[2508]: I0421 10:21:41.744994 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 
10:21:41.779311 kubelet[2508]: I0421 10:21:41.775447 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:21:41.790147 containerd[1457]: time="2026-04-21T10:21:41.789399947Z" level=info msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\"" Apr 21 10:21:41.791677 containerd[1457]: time="2026-04-21T10:21:41.791083547Z" level=info msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\"" Apr 21 10:21:41.792207 containerd[1457]: time="2026-04-21T10:21:41.792166958Z" level=info msg="Ensure that sandbox 3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d in task-service has been cleanup successfully" Apr 21 10:21:41.798982 containerd[1457]: time="2026-04-21T10:21:41.795624191Z" level=info msg="Ensure that sandbox d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7 in task-service has been cleanup successfully" Apr 21 10:21:41.798982 containerd[1457]: time="2026-04-21T10:21:41.797489458Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:21:41.832342 kubelet[2508]: I0421 10:21:41.832119 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:21:41.833285 kubelet[2508]: E0421 10:21:41.832780 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:41.949948 containerd[1457]: time="2026-04-21T10:21:41.947234642Z" level=error msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" failed" error="failed to destroy network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.950107 kubelet[2508]: E0421 10:21:41.948121 2508 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:21:41.950107 kubelet[2508]: E0421 10:21:41.948166 2508 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7"} Apr 21 10:21:41.950107 kubelet[2508]: E0421 10:21:41.948589 2508 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4b23b05-a52a-44ea-b704-ed8f7e3ac456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:21:41.950107 kubelet[2508]: E0421 10:21:41.948615 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4b23b05-a52a-44ea-b704-ed8f7e3ac456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" podUID="e4b23b05-a52a-44ea-b704-ed8f7e3ac456" Apr 21 10:21:41.963151 containerd[1457]: time="2026-04-21T10:21:41.963031066Z" level=error msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" failed" error="failed to destroy network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:41.965634 kubelet[2508]: E0421 10:21:41.965478 2508 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:21:41.965735 kubelet[2508]: E0421 10:21:41.965633 2508 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d"} Apr 21 10:21:41.965782 kubelet[2508]: E0421 10:21:41.965706 2508 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:21:41.965892 kubelet[2508]: E0421 10:21:41.965878 
2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f7d6a7b-34e6-4667-9ed5-9310508d9afb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5xcfq" podUID="8f7d6a7b-34e6-4667-9ed5-9310508d9afb" Apr 21 10:21:42.002991 containerd[1457]: time="2026-04-21T10:21:42.002558193Z" level=info msg="CreateContainer within sandbox \"51cb8b0ce1ae09041d45c57dc2a96c9d6ddbe7fd335a271249b6d784b34eb5a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3989638f7099e68fa6ab58d22fc4794094e7ac9dbde9aee334d063a5cfce5c98\"" Apr 21 10:21:42.009599 containerd[1457]: time="2026-04-21T10:21:42.008337618Z" level=info msg="StartContainer for \"3989638f7099e68fa6ab58d22fc4794094e7ac9dbde9aee334d063a5cfce5c98\"" Apr 21 10:21:42.062601 containerd[1457]: time="2026-04-21T10:21:42.061667480Z" level=error msg="Failed to destroy network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.063498 containerd[1457]: time="2026-04-21T10:21:42.063162919Z" level=error msg="encountered an error cleaning up failed sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.064318 containerd[1457]: 
time="2026-04-21T10:21:42.063588885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zv4b9,Uid:fd00ac23-0fb5-4d7f-956d-e123593d4ebc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.066024 kubelet[2508]: E0421 10:21:42.065854 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.066593 kubelet[2508]: E0421 10:21:42.066573 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 21 10:21:42.066679 kubelet[2508]: E0421 10:21:42.066670 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-zv4b9" Apr 21 10:21:42.066814 kubelet[2508]: E0421 10:21:42.066741 2508 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-zv4b9_calico-system(fd00ac23-0fb5-4d7f-956d-e123593d4ebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-zv4b9_calico-system(fd00ac23-0fb5-4d7f-956d-e123593d4ebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-zv4b9" podUID="fd00ac23-0fb5-4d7f-956d-e123593d4ebc" Apr 21 10:21:42.077491 systemd[1]: Started cri-containerd-3989638f7099e68fa6ab58d22fc4794094e7ac9dbde9aee334d063a5cfce5c98.scope - libcontainer container 3989638f7099e68fa6ab58d22fc4794094e7ac9dbde9aee334d063a5cfce5c98. Apr 21 10:21:42.086630 containerd[1457]: time="2026-04-21T10:21:42.086521563Z" level=error msg="Failed to destroy network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.087322 containerd[1457]: time="2026-04-21T10:21:42.087273792Z" level=error msg="encountered an error cleaning up failed sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.087374 containerd[1457]: time="2026-04-21T10:21:42.087330250Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7f78d88875-ckm7h,Uid:7fa99074-099e-4ed7-ae7d-e7227f1db188,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.087822 kubelet[2508]: E0421 10:21:42.087477 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.087822 kubelet[2508]: E0421 10:21:42.087518 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:42.087822 kubelet[2508]: E0421 10:21:42.087538 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f78d88875-ckm7h" Apr 21 10:21:42.088026 kubelet[2508]: E0421 10:21:42.087602 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-7f78d88875-ckm7h_calico-system(7fa99074-099e-4ed7-ae7d-e7227f1db188)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f78d88875-ckm7h_calico-system(7fa99074-099e-4ed7-ae7d-e7227f1db188)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f78d88875-ckm7h" podUID="7fa99074-099e-4ed7-ae7d-e7227f1db188" Apr 21 10:21:42.091319 containerd[1457]: time="2026-04-21T10:21:42.091280829Z" level=error msg="Failed to destroy network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.091629 containerd[1457]: time="2026-04-21T10:21:42.091546705Z" level=error msg="encountered an error cleaning up failed sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.091629 containerd[1457]: time="2026-04-21T10:21:42.091611602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4jtjk,Uid:6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.091807 kubelet[2508]: E0421 10:21:42.091758 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.091807 kubelet[2508]: E0421 10:21:42.091802 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4jtjk" Apr 21 10:21:42.091862 kubelet[2508]: E0421 10:21:42.091817 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-4jtjk" Apr 21 10:21:42.091862 kubelet[2508]: E0421 10:21:42.091851 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-4jtjk_kube-system(6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-4jtjk_kube-system(6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-4jtjk" podUID="6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5" Apr 21 10:21:42.094645 containerd[1457]: time="2026-04-21T10:21:42.094491151Z" level=error msg="Failed to destroy network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.095046 containerd[1457]: time="2026-04-21T10:21:42.095025829Z" level=error msg="encountered an error cleaning up failed sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.095992 containerd[1457]: time="2026-04-21T10:21:42.095970943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wgmqg,Uid:220f4521-b6d9-4c5d-96ae-7597b58ee030,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.096196 kubelet[2508]: E0421 10:21:42.096159 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.096236 kubelet[2508]: E0421 10:21:42.096196 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wgmqg" Apr 21 10:21:42.096236 kubelet[2508]: E0421 10:21:42.096213 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wgmqg" Apr 21 10:21:42.096277 kubelet[2508]: E0421 10:21:42.096261 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-wgmqg_kube-system(220f4521-b6d9-4c5d-96ae-7597b58ee030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-wgmqg_kube-system(220f4521-b6d9-4c5d-96ae-7597b58ee030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-wgmqg" podUID="220f4521-b6d9-4c5d-96ae-7597b58ee030" Apr 21 10:21:42.096485 containerd[1457]: 
time="2026-04-21T10:21:42.096467646Z" level=error msg="Failed to destroy network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.096840 containerd[1457]: time="2026-04-21T10:21:42.096792153Z" level=error msg="encountered an error cleaning up failed sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.097009 containerd[1457]: time="2026-04-21T10:21:42.096925826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-gkbt6,Uid:386cc7c2-feea-4942-a60b-423727e06d40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.097098 kubelet[2508]: E0421 10:21:42.097038 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.097098 kubelet[2508]: E0421 10:21:42.097060 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7999b6f797-gkbt6" Apr 21 10:21:42.097098 kubelet[2508]: E0421 10:21:42.097071 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7999b6f797-gkbt6" Apr 21 10:21:42.097303 kubelet[2508]: E0421 10:21:42.097102 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7999b6f797-gkbt6_calico-system(386cc7c2-feea-4942-a60b-423727e06d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7999b6f797-gkbt6_calico-system(386cc7c2-feea-4942-a60b-423727e06d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7999b6f797-gkbt6" podUID="386cc7c2-feea-4942-a60b-423727e06d40" Apr 21 10:21:42.112177 containerd[1457]: time="2026-04-21T10:21:42.112148435Z" level=error msg="Failed to destroy network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 21 10:21:42.112581 containerd[1457]: time="2026-04-21T10:21:42.112495547Z" level=error msg="encountered an error cleaning up failed sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.112637 containerd[1457]: time="2026-04-21T10:21:42.112532032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-z5ch8,Uid:c70b1305-257f-44b6-ab9b-a0c251378e0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.113025 kubelet[2508]: E0421 10:21:42.112897 2508 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:21:42.113025 kubelet[2508]: E0421 10:21:42.112981 2508 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7999b6f797-z5ch8" Apr 21 10:21:42.113025 
kubelet[2508]: E0421 10:21:42.113010 2508 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7999b6f797-z5ch8" Apr 21 10:21:42.113139 kubelet[2508]: E0421 10:21:42.113093 2508 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7999b6f797-z5ch8_calico-system(c70b1305-257f-44b6-ab9b-a0c251378e0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7999b6f797-z5ch8_calico-system(c70b1305-257f-44b6-ab9b-a0c251378e0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7999b6f797-z5ch8" podUID="c70b1305-257f-44b6-ab9b-a0c251378e0f" Apr 21 10:21:42.118899 containerd[1457]: time="2026-04-21T10:21:42.118848735Z" level=info msg="StartContainer for \"3989638f7099e68fa6ab58d22fc4794094e7ac9dbde9aee334d063a5cfce5c98\" returns successfully" Apr 21 10:21:42.358301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d-shm.mount: Deactivated successfully. 
Apr 21 10:21:42.782112 kubelet[2508]: I0421 10:21:42.780082 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:21:42.783680 containerd[1457]: time="2026-04-21T10:21:42.781277411Z" level=info msg="StopPodSandbox for \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\"" Apr 21 10:21:42.783680 containerd[1457]: time="2026-04-21T10:21:42.781644902Z" level=info msg="Ensure that sandbox e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203 in task-service has been cleanup successfully" Apr 21 10:21:42.814601 kubelet[2508]: I0421 10:21:42.814442 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:21:42.824050 containerd[1457]: time="2026-04-21T10:21:42.823099332Z" level=info msg="StopPodSandbox for \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\"" Apr 21 10:21:42.824050 containerd[1457]: time="2026-04-21T10:21:42.823583631Z" level=info msg="Ensure that sandbox ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4 in task-service has been cleanup successfully" Apr 21 10:21:42.827259 kubelet[2508]: I0421 10:21:42.827238 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:21:42.829696 containerd[1457]: time="2026-04-21T10:21:42.829673050Z" level=info msg="StopPodSandbox for \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\"" Apr 21 10:21:42.833590 kubelet[2508]: I0421 10:21:42.833071 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Apr 21 10:21:42.834731 containerd[1457]: time="2026-04-21T10:21:42.833774264Z" level=info msg="StopPodSandbox for 
\"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\"" Apr 21 10:21:42.843298 containerd[1457]: time="2026-04-21T10:21:42.843070449Z" level=info msg="Ensure that sandbox 9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab in task-service has been cleanup successfully" Apr 21 10:21:42.843682 containerd[1457]: time="2026-04-21T10:21:42.843655631Z" level=info msg="Ensure that sandbox 6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d in task-service has been cleanup successfully" Apr 21 10:21:42.854692 kubelet[2508]: I0421 10:21:42.854402 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:21:42.863442 containerd[1457]: time="2026-04-21T10:21:42.863388494Z" level=info msg="StopPodSandbox for \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\"" Apr 21 10:21:42.871433 containerd[1457]: time="2026-04-21T10:21:42.871295818Z" level=info msg="Ensure that sandbox 8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a in task-service has been cleanup successfully" Apr 21 10:21:42.881605 kubelet[2508]: I0421 10:21:42.881360 2508 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:21:42.885874 kubelet[2508]: E0421 10:21:42.883857 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:42.885970 containerd[1457]: time="2026-04-21T10:21:42.884726006Z" level=info msg="StopPodSandbox for \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\"" Apr 21 10:21:42.887353 containerd[1457]: time="2026-04-21T10:21:42.887140815Z" level=info msg="Ensure that sandbox 870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44 in task-service has been cleanup 
successfully" Apr 21 10:21:42.925822 kubelet[2508]: I0421 10:21:42.925748 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-gp7w4" podStartSLOduration=2.035602338 podStartE2EDuration="15.925731635s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:27.847393661 +0000 UTC m=+18.667742120" lastFinishedPulling="2026-04-21 10:21:41.737522957 +0000 UTC m=+32.557871417" observedRunningTime="2026-04-21 10:21:42.8528757 +0000 UTC m=+33.673224162" watchObservedRunningTime="2026-04-21 10:21:42.925731635 +0000 UTC m=+33.746080105" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.004 [INFO][3757] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.012 [INFO][3757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" iface="eth0" netns="/var/run/netns/cni-def7fab6-39f7-9dbc-8988-27ca503e213a" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.024 [INFO][3757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" iface="eth0" netns="/var/run/netns/cni-def7fab6-39f7-9dbc-8988-27ca503e213a" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" iface="eth0" netns="/var/run/netns/cni-def7fab6-39f7-9dbc-8988-27ca503e213a" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3757] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3757] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.127 [INFO][3824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.127 [INFO][3824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.129 [INFO][3824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.134 [WARNING][3824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.134 [INFO][3824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.136 [INFO][3824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.141407 containerd[1457]: 2026-04-21 10:21:43.139 [INFO][3757] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:21:43.141748 containerd[1457]: time="2026-04-21T10:21:43.141590475Z" level=info msg="TearDown network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" successfully" Apr 21 10:21:43.141748 containerd[1457]: time="2026-04-21T10:21:43.141676380Z" level=info msg="StopPodSandbox for \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" returns successfully" Apr 21 10:21:43.143792 systemd[1]: run-netns-cni\x2ddef7fab6\x2d39f7\x2d9dbc\x2d8988\x2d27ca503e213a.mount: Deactivated successfully. Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.012 [INFO][3803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.024 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" iface="eth0" netns="/var/run/netns/cni-1a3dfcb2-34e6-c3fa-e8bb-ccb322c80890" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.024 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" iface="eth0" netns="/var/run/netns/cni-1a3dfcb2-34e6-c3fa-e8bb-ccb322c80890" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" iface="eth0" netns="/var/run/netns/cni-1a3dfcb2-34e6-c3fa-e8bb-ccb322c80890" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.025 [INFO][3803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.121 [INFO][3823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.121 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.121 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.127 [WARNING][3823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.127 [INFO][3823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.128 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.147022 containerd[1457]: 2026-04-21 10:21:43.144 [INFO][3803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:21:43.147461 containerd[1457]: time="2026-04-21T10:21:43.147440123Z" level=info msg="TearDown network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" successfully" Apr 21 10:21:43.147531 containerd[1457]: time="2026-04-21T10:21:43.147500479Z" level=info msg="StopPodSandbox for \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" returns successfully" Apr 21 10:21:43.156451 kubelet[2508]: E0421 10:21:43.154373 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:43.155265 systemd[1]: run-netns-cni\x2d1a3dfcb2\x2d34e6\x2dc3fa\x2de8bb\x2dccb322c80890.mount: Deactivated successfully. 
Apr 21 10:21:43.160492 containerd[1457]: time="2026-04-21T10:21:43.159966208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4jtjk,Uid:6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5,Namespace:kube-system,Attempt:1,}" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.049 [INFO][3758] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.052 [INFO][3758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" iface="eth0" netns="/var/run/netns/cni-f7406c6e-8458-cf92-976e-3dfbdffd5c93" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.053 [INFO][3758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" iface="eth0" netns="/var/run/netns/cni-f7406c6e-8458-cf92-976e-3dfbdffd5c93" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.055 [INFO][3758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" iface="eth0" netns="/var/run/netns/cni-f7406c6e-8458-cf92-976e-3dfbdffd5c93" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.055 [INFO][3758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.056 [INFO][3758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.130 [INFO][3837] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.130 [INFO][3837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.136 [INFO][3837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.144 [WARNING][3837] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.144 [INFO][3837] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.146 [INFO][3837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.170158 containerd[1457]: 2026-04-21 10:21:43.159 [INFO][3758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:21:43.172690 containerd[1457]: time="2026-04-21T10:21:43.171612740Z" level=info msg="TearDown network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" successfully" Apr 21 10:21:43.172690 containerd[1457]: time="2026-04-21T10:21:43.171639039Z" level=info msg="StopPodSandbox for \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" returns successfully" Apr 21 10:21:43.172031 systemd[1]: run-netns-cni\x2df7406c6e\x2d8458\x2dcf92\x2d976e\x2d3dfbdffd5c93.mount: Deactivated successfully. 
Apr 21 10:21:43.174332 containerd[1457]: time="2026-04-21T10:21:43.174084041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-z5ch8,Uid:c70b1305-257f-44b6-ab9b-a0c251378e0f,Namespace:calico-system,Attempt:1,}" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.001 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.001 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" iface="eth0" netns="/var/run/netns/cni-2f642a40-9f13-90d5-dfc9-602f381be966" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.002 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" iface="eth0" netns="/var/run/netns/cni-2f642a40-9f13-90d5-dfc9-602f381be966" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.012 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" iface="eth0" netns="/var/run/netns/cni-2f642a40-9f13-90d5-dfc9-602f381be966" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.012 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.012 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.139 [INFO][3819] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.139 [INFO][3819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.146 [INFO][3819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.167 [WARNING][3819] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.167 [INFO][3819] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.169 [INFO][3819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.187804 containerd[1457]: 2026-04-21 10:21:43.173 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:21:43.187804 containerd[1457]: time="2026-04-21T10:21:43.185618086Z" level=info msg="TearDown network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" successfully" Apr 21 10:21:43.187804 containerd[1457]: time="2026-04-21T10:21:43.185635974Z" level=info msg="StopPodSandbox for \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" returns successfully" Apr 21 10:21:43.197076 containerd[1457]: time="2026-04-21T10:21:43.196673817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-gkbt6,Uid:386cc7c2-feea-4942-a60b-423727e06d40,Namespace:calico-system,Attempt:1,}" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.076 [INFO][3793] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.082 [INFO][3793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" iface="eth0" netns="/var/run/netns/cni-0890ffef-52ef-9576-eed2-88aa20d75263" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.082 [INFO][3793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" iface="eth0" netns="/var/run/netns/cni-0890ffef-52ef-9576-eed2-88aa20d75263" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.083 [INFO][3793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" iface="eth0" netns="/var/run/netns/cni-0890ffef-52ef-9576-eed2-88aa20d75263" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.083 [INFO][3793] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.083 [INFO][3793] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.181 [INFO][3845] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.182 [INFO][3845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.182 [INFO][3845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.190 [WARNING][3845] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.190 [INFO][3845] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.201 [INFO][3845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.206071 containerd[1457]: 2026-04-21 10:21:43.203 [INFO][3793] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:21:43.206071 containerd[1457]: time="2026-04-21T10:21:43.205057662Z" level=info msg="TearDown network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" successfully" Apr 21 10:21:43.206071 containerd[1457]: time="2026-04-21T10:21:43.205103935Z" level=info msg="StopPodSandbox for \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" returns successfully" Apr 21 10:21:43.215594 containerd[1457]: time="2026-04-21T10:21:43.215034686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zv4b9,Uid:fd00ac23-0fb5-4d7f-956d-e123593d4ebc,Namespace:calico-system,Attempt:1,}" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.076 [INFO][3766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.080 [INFO][3766] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" iface="eth0" netns="/var/run/netns/cni-720ecd6b-ebff-ad00-73a8-e23201e01dfa" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.083 [INFO][3766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" iface="eth0" netns="/var/run/netns/cni-720ecd6b-ebff-ad00-73a8-e23201e01dfa" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.085 [INFO][3766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" iface="eth0" netns="/var/run/netns/cni-720ecd6b-ebff-ad00-73a8-e23201e01dfa" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.085 [INFO][3766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.085 [INFO][3766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.201 [INFO][3847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.204 [INFO][3847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.204 [INFO][3847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.221 [WARNING][3847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.221 [INFO][3847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.223 [INFO][3847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.228837 containerd[1457]: 2026-04-21 10:21:43.226 [INFO][3766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Apr 21 10:21:43.229368 containerd[1457]: time="2026-04-21T10:21:43.228973658Z" level=info msg="TearDown network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" successfully" Apr 21 10:21:43.229368 containerd[1457]: time="2026-04-21T10:21:43.228993978Z" level=info msg="StopPodSandbox for \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" returns successfully" Apr 21 10:21:43.237521 kubelet[2508]: E0421 10:21:43.232949 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:43.240466 containerd[1457]: time="2026-04-21T10:21:43.239803122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wgmqg,Uid:220f4521-b6d9-4c5d-96ae-7597b58ee030,Namespace:kube-system,Attempt:1,}" Apr 21 10:21:43.308378 kubelet[2508]: I0421 10:21:43.308354 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-backend-key-pair\") pod \"7fa99074-099e-4ed7-ae7d-e7227f1db188\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " Apr 21 10:21:43.308647 kubelet[2508]: I0421 10:21:43.308598 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-ca-bundle\") pod \"7fa99074-099e-4ed7-ae7d-e7227f1db188\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " Apr 21 10:21:43.308837 kubelet[2508]: I0421 10:21:43.308825 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-nginx-config\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-nginx-config\") pod \"7fa99074-099e-4ed7-ae7d-e7227f1db188\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " Apr 21 10:21:43.308896 kubelet[2508]: I0421 10:21:43.308889 2508 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/7fa99074-099e-4ed7-ae7d-e7227f1db188-kube-api-access-29r6x\" (UniqueName: \"kubernetes.io/projected/7fa99074-099e-4ed7-ae7d-e7227f1db188-kube-api-access-29r6x\") pod \"7fa99074-099e-4ed7-ae7d-e7227f1db188\" (UID: \"7fa99074-099e-4ed7-ae7d-e7227f1db188\") " Apr 21 10:21:43.309318 kubelet[2508]: I0421 10:21:43.309254 2508 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-ca-bundle" pod "7fa99074-099e-4ed7-ae7d-e7227f1db188" (UID: "7fa99074-099e-4ed7-ae7d-e7227f1db188"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:21:43.309489 kubelet[2508]: I0421 10:21:43.309447 2508 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-nginx-config" pod "7fa99074-099e-4ed7-ae7d-e7227f1db188" (UID: "7fa99074-099e-4ed7-ae7d-e7227f1db188"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:21:43.317557 kubelet[2508]: I0421 10:21:43.317527 2508 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa99074-099e-4ed7-ae7d-e7227f1db188-kube-api-access-29r6x" pod "7fa99074-099e-4ed7-ae7d-e7227f1db188" (UID: "7fa99074-099e-4ed7-ae7d-e7227f1db188"). InnerVolumeSpecName "kube-api-access-29r6x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:21:43.318100 kubelet[2508]: I0421 10:21:43.318052 2508 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-backend-key-pair" pod "7fa99074-099e-4ed7-ae7d-e7227f1db188" (UID: "7fa99074-099e-4ed7-ae7d-e7227f1db188"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:21:43.327164 systemd[1]: Removed slice kubepods-besteffort-pod7fa99074_099e_4ed7_ae7d_e7227f1db188.slice - libcontainer container kubepods-besteffort-pod7fa99074_099e_4ed7_ae7d_e7227f1db188.slice. Apr 21 10:21:43.357114 systemd[1]: run-netns-cni\x2d2f642a40\x2d9f13\x2d90d5\x2ddfc9\x2d602f381be966.mount: Deactivated successfully. Apr 21 10:21:43.357183 systemd[1]: run-netns-cni\x2d720ecd6b\x2debff\x2dad00\x2d73a8\x2de23201e01dfa.mount: Deactivated successfully. Apr 21 10:21:43.357219 systemd[1]: run-netns-cni\x2d0890ffef\x2d52ef\x2d9576\x2deed2\x2d88aa20d75263.mount: Deactivated successfully. 
Apr 21 10:21:43.357257 systemd[1]: var-lib-kubelet-pods-7fa99074\x2d099e\x2d4ed7\x2dae7d\x2de7227f1db188-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29r6x.mount: Deactivated successfully. Apr 21 10:21:43.357299 systemd[1]: var-lib-kubelet-pods-7fa99074\x2d099e\x2d4ed7\x2dae7d\x2de7227f1db188-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:21:43.410393 kubelet[2508]: I0421 10:21:43.410340 2508 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29r6x\" (UniqueName: \"kubernetes.io/projected/7fa99074-099e-4ed7-ae7d-e7227f1db188-kube-api-access-29r6x\") on node \"localhost\" DevicePath \"\"" Apr 21 10:21:43.410393 kubelet[2508]: I0421 10:21:43.410363 2508 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 10:21:43.410393 kubelet[2508]: I0421 10:21:43.410370 2508 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 10:21:43.410393 kubelet[2508]: I0421 10:21:43.410376 2508 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7fa99074-099e-4ed7-ae7d-e7227f1db188-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 10:21:43.460749 systemd-networkd[1383]: cali49e47f3b705: Link UP Apr 21 10:21:43.460933 systemd-networkd[1383]: cali49e47f3b705: Gained carrier Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.276 [ERROR][3889] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 
10:21:43.307 [INFO][3889] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0 calico-apiserver-7999b6f797- calico-system c70b1305-257f-44b6-ab9b-a0c251378e0f 894 0 2026-04-21 10:21:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7999b6f797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7999b6f797-z5ch8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali49e47f3b705 [] [] }} ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.307 [INFO][3889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.366 [INFO][3951] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" HandleID="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.380 [INFO][3951] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" HandleID="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" 
Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7999b6f797-z5ch8", "timestamp":"2026-04-21 10:21:43.366769917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000453a20)} Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.380 [INFO][3951] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.380 [INFO][3951] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.380 [INFO][3951] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.397 [INFO][3951] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.402 [INFO][3951] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.408 [INFO][3951] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.410 [INFO][3951] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.411 [INFO][3951] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.411 [INFO][3951] ipam/ipam.go 1245: 
Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.413 [INFO][3951] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3 Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.418 [INFO][3951] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.443 [INFO][3951] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.443 [INFO][3951] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" host="localhost" Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.444 [INFO][3951] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:21:43.471973 containerd[1457]: 2026-04-21 10:21:43.444 [INFO][3951] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" HandleID="k8s-pod-network.59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.449 [INFO][3889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"c70b1305-257f-44b6-ab9b-a0c251378e0f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7999b6f797-z5ch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali49e47f3b705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.449 [INFO][3889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.449 [INFO][3889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49e47f3b705 ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.459 [INFO][3889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.460 [INFO][3889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", 
SelfLink:"", UID:"c70b1305-257f-44b6-ab9b-a0c251378e0f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3", Pod:"calico-apiserver-7999b6f797-z5ch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali49e47f3b705", MAC:"e6:77:b7:ce:14:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.472518 containerd[1457]: 2026-04-21 10:21:43.468 [INFO][3889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-z5ch8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:21:43.489224 containerd[1457]: time="2026-04-21T10:21:43.488959135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:43.489224 containerd[1457]: time="2026-04-21T10:21:43.489012326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:43.489224 containerd[1457]: time="2026-04-21T10:21:43.489069981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.489224 containerd[1457]: time="2026-04-21T10:21:43.489150761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.511272 systemd[1]: Started cri-containerd-59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3.scope - libcontainer container 59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3. Apr 21 10:21:43.522639 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:43.529349 systemd-networkd[1383]: cali4fe7b90b52e: Link UP Apr 21 10:21:43.530082 systemd-networkd[1383]: cali4fe7b90b52e: Gained carrier Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.238 [ERROR][3875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.305 [INFO][3875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--4jtjk-eth0 coredns-7d764666f9- kube-system 6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5 893 0 2026-04-21 10:21:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-7d764666f9-4jtjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4fe7b90b52e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.306 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.383 [INFO][3964] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" HandleID="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.399 [INFO][3964] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" HandleID="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-4jtjk", "timestamp":"2026-04-21 10:21:43.383283782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000294c60)} Apr 21 
10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.399 [INFO][3964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.444 [INFO][3964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.445 [INFO][3964] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.496 [INFO][3964] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.503 [INFO][3964] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.508 [INFO][3964] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.510 [INFO][3964] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.512 [INFO][3964] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.512 [INFO][3964] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.513 [INFO][3964] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241 Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.517 [INFO][3964] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.522 [INFO][3964] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.522 [INFO][3964] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" host="localhost" Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.522 [INFO][3964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.542567 containerd[1457]: 2026-04-21 10:21:43.523 [INFO][3964] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" HandleID="k8s-pod-network.46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.525 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4jtjk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-4jtjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fe7b90b52e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.525 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.525 [INFO][3875] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fe7b90b52e ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.529 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.530 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4jtjk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241", 
Pod:"coredns-7d764666f9-4jtjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fe7b90b52e", MAC:"8e:0c:53:7c:c6:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.543007 containerd[1457]: 2026-04-21 10:21:43.540 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241" Namespace="kube-system" Pod="coredns-7d764666f9-4jtjk" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:21:43.555654 containerd[1457]: time="2026-04-21T10:21:43.555602620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-z5ch8,Uid:c70b1305-257f-44b6-ab9b-a0c251378e0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3\"" Apr 21 10:21:43.557437 containerd[1457]: time="2026-04-21T10:21:43.557394719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:21:43.560511 
containerd[1457]: time="2026-04-21T10:21:43.560377467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:43.560511 containerd[1457]: time="2026-04-21T10:21:43.560429996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:43.560511 containerd[1457]: time="2026-04-21T10:21:43.560446848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.560642 containerd[1457]: time="2026-04-21T10:21:43.560539045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.577063 systemd[1]: Started cri-containerd-46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241.scope - libcontainer container 46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241. 
Apr 21 10:21:43.584985 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:43.608573 containerd[1457]: time="2026-04-21T10:21:43.608515817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-4jtjk,Uid:6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5,Namespace:kube-system,Attempt:1,} returns sandbox id \"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241\"" Apr 21 10:21:43.609595 kubelet[2508]: E0421 10:21:43.609544 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:43.620363 containerd[1457]: time="2026-04-21T10:21:43.620253995Z" level=info msg="CreateContainer within sandbox \"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:21:43.652126 containerd[1457]: time="2026-04-21T10:21:43.651931184Z" level=info msg="CreateContainer within sandbox \"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d1a258256cef7394748ce0224b7b938ce519e025241f32d5daf1594b055e30e\"" Apr 21 10:21:43.657347 containerd[1457]: time="2026-04-21T10:21:43.655612099Z" level=info msg="StartContainer for \"4d1a258256cef7394748ce0224b7b938ce519e025241f32d5daf1594b055e30e\"" Apr 21 10:21:43.668985 systemd-networkd[1383]: calidaf837d9e9e: Link UP Apr 21 10:21:43.669208 systemd-networkd[1383]: calidaf837d9e9e: Gained carrier Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.304 [ERROR][3907] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.324 [INFO][3907] cni-plugin/plugin.go 342: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0 goldmane-9f7667bb8- calico-system fd00ac23-0fb5-4d7f-956d-e123593d4ebc 896 0 2026-04-21 10:21:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-zv4b9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidaf837d9e9e [] [] }} ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.324 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.397 [INFO][3957] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" HandleID="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.407 [INFO][3957] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" HandleID="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f6570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-9f7667bb8-zv4b9", "timestamp":"2026-04-21 10:21:43.397548885 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384420)} Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.407 [INFO][3957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.522 [INFO][3957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.522 [INFO][3957] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.596 [INFO][3957] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.604 [INFO][3957] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.609 [INFO][3957] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.611 [INFO][3957] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.625 [INFO][3957] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.626 [INFO][3957] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 
10:21:43.627 [INFO][3957] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530 Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.655 [INFO][3957] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.662 [INFO][3957] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.662 [INFO][3957] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" host="localhost" Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.662 [INFO][3957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:21:43.685613 containerd[1457]: 2026-04-21 10:21:43.662 [INFO][3957] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" HandleID="k8s-pod-network.652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.666 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fd00ac23-0fb5-4d7f-956d-e123593d4ebc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-zv4b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidaf837d9e9e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.666 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.666 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaf837d9e9e ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.668 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.668 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fd00ac23-0fb5-4d7f-956d-e123593d4ebc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530", Pod:"goldmane-9f7667bb8-zv4b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidaf837d9e9e", MAC:"32:58:5a:35:06:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.686083 containerd[1457]: 2026-04-21 10:21:43.684 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530" Namespace="calico-system" Pod="goldmane-9f7667bb8-zv4b9" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:21:43.686073 systemd[1]: Started cri-containerd-4d1a258256cef7394748ce0224b7b938ce519e025241f32d5daf1594b055e30e.scope - libcontainer container 4d1a258256cef7394748ce0224b7b938ce519e025241f32d5daf1594b055e30e. Apr 21 10:21:43.704037 containerd[1457]: time="2026-04-21T10:21:43.703962999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:43.704126 containerd[1457]: time="2026-04-21T10:21:43.704044217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:43.704126 containerd[1457]: time="2026-04-21T10:21:43.704062410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.704195 containerd[1457]: time="2026-04-21T10:21:43.704142737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.711653 containerd[1457]: time="2026-04-21T10:21:43.711615764Z" level=info msg="StartContainer for \"4d1a258256cef7394748ce0224b7b938ce519e025241f32d5daf1594b055e30e\" returns successfully" Apr 21 10:21:43.729065 systemd[1]: Started cri-containerd-652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530.scope - libcontainer container 652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530. Apr 21 10:21:43.750381 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:43.764434 systemd-networkd[1383]: cali4f4a927720e: Link UP Apr 21 10:21:43.764984 systemd-networkd[1383]: cali4f4a927720e: Gained carrier Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.305 [ERROR][3917] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.333 [INFO][3917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0 calico-apiserver-7999b6f797- calico-system 386cc7c2-feea-4942-a60b-423727e06d40 891 0 2026-04-21 10:21:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7999b6f797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7999b6f797-gkbt6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4f4a927720e [] [] }} ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.333 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.407 [INFO][3970] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" HandleID="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.417 [INFO][3970] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" HandleID="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b9840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7999b6f797-gkbt6", "timestamp":"2026-04-21 10:21:43.40769766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001c02c0)} Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.417 [INFO][3970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.662 [INFO][3970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.664 [INFO][3970] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.696 [INFO][3970] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.703 [INFO][3970] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.710 [INFO][3970] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.712 [INFO][3970] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.714 [INFO][3970] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.714 [INFO][3970] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.725 [INFO][3970] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215 Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.737 [INFO][3970] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.755 [INFO][3970] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.755 [INFO][3970] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" host="localhost" Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.756 [INFO][3970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.788937 containerd[1457]: 2026-04-21 10:21:43.757 [INFO][3970] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" HandleID="k8s-pod-network.82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.759 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"386cc7c2-feea-4942-a60b-423727e06d40", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7999b6f797-gkbt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4f4a927720e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.759 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.759 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f4a927720e ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.765 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" 
Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.768 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"386cc7c2-feea-4942-a60b-423727e06d40", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215", Pod:"calico-apiserver-7999b6f797-gkbt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4f4a927720e", MAC:"5e:39:72:48:3a:d6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.790078 containerd[1457]: 2026-04-21 10:21:43.782 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215" Namespace="calico-system" Pod="calico-apiserver-7999b6f797-gkbt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:21:43.790078 containerd[1457]: time="2026-04-21T10:21:43.789588254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-zv4b9,Uid:fd00ac23-0fb5-4d7f-956d-e123593d4ebc,Namespace:calico-system,Attempt:1,} returns sandbox id \"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530\"" Apr 21 10:21:43.807761 containerd[1457]: time="2026-04-21T10:21:43.807672704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:43.807761 containerd[1457]: time="2026-04-21T10:21:43.807714600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:43.807761 containerd[1457]: time="2026-04-21T10:21:43.807722588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.808092 containerd[1457]: time="2026-04-21T10:21:43.807826006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.834065 systemd[1]: Started cri-containerd-82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215.scope - libcontainer container 82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215. 
Apr 21 10:21:43.848248 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:43.853496 systemd-networkd[1383]: cali7472f81b665: Link UP Apr 21 10:21:43.854258 systemd-networkd[1383]: cali7472f81b665: Gained carrier Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.364 [ERROR][3935] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.399 [INFO][3935] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--wgmqg-eth0 coredns-7d764666f9- kube-system 220f4521-b6d9-4c5d-96ae-7597b58ee030 895 0 2026-04-21 10:21:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-wgmqg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7472f81b665 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.399 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.453 [INFO][3986] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" HandleID="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.461 [INFO][3986] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" HandleID="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034eec0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-wgmqg", "timestamp":"2026-04-21 10:21:43.453831212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007b2000)} Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.461 [INFO][3986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.755 [INFO][3986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.755 [INFO][3986] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.796 [INFO][3986] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.806 [INFO][3986] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.811 [INFO][3986] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.828 [INFO][3986] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.833 [INFO][3986] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.833 [INFO][3986] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.834 [INFO][3986] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.839 [INFO][3986] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.847 [INFO][3986] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.847 [INFO][3986] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" host="localhost" Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.847 [INFO][3986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:43.865510 containerd[1457]: 2026-04-21 10:21:43.848 [INFO][3986] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" HandleID="k8s-pod-network.51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.851 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wgmqg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"220f4521-b6d9-4c5d-96ae-7597b58ee030", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-wgmqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7472f81b665", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.851 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.851 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7472f81b665 ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 
10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.854 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.855 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wgmqg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"220f4521-b6d9-4c5d-96ae-7597b58ee030", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e", Pod:"coredns-7d764666f9-wgmqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7472f81b665", 
MAC:"ae:ff:4b:57:4d:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:43.865937 containerd[1457]: 2026-04-21 10:21:43.862 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e" Namespace="kube-system" Pod="coredns-7d764666f9-wgmqg" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wgmqg-eth0" Apr 21 10:21:43.879018 containerd[1457]: time="2026-04-21T10:21:43.878992893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7999b6f797-gkbt6,Uid:386cc7c2-feea-4942-a60b-423727e06d40,Namespace:calico-system,Attempt:1,} returns sandbox id \"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215\"" Apr 21 10:21:43.881307 kubelet[2508]: E0421 10:21:43.881291 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:43.882614 containerd[1457]: time="2026-04-21T10:21:43.882489168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:43.882614 containerd[1457]: time="2026-04-21T10:21:43.882534085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:43.882614 containerd[1457]: time="2026-04-21T10:21:43.882545468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.882614 containerd[1457]: time="2026-04-21T10:21:43.882594370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:43.899574 kubelet[2508]: I0421 10:21:43.899518 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-4jtjk" podStartSLOduration=27.899458911 podStartE2EDuration="27.899458911s" podCreationTimestamp="2026-04-21 10:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:43.892380876 +0000 UTC m=+34.712729346" watchObservedRunningTime="2026-04-21 10:21:43.899458911 +0000 UTC m=+34.719807378" Apr 21 10:21:43.900096 systemd[1]: Started cri-containerd-51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e.scope - libcontainer container 51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e. 
Apr 21 10:21:43.923577 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:43.977394 containerd[1457]: time="2026-04-21T10:21:43.977342234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wgmqg,Uid:220f4521-b6d9-4c5d-96ae-7597b58ee030,Namespace:kube-system,Attempt:1,} returns sandbox id \"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e\"" Apr 21 10:21:43.980239 kubelet[2508]: E0421 10:21:43.980183 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:43.986829 containerd[1457]: time="2026-04-21T10:21:43.986780587Z" level=info msg="CreateContainer within sandbox \"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:21:43.988309 systemd[1]: Created slice kubepods-besteffort-pode20ed5d9_8fb7_41be_977b_71a41354d15e.slice - libcontainer container kubepods-besteffort-pode20ed5d9_8fb7_41be_977b_71a41354d15e.slice. Apr 21 10:21:44.012895 containerd[1457]: time="2026-04-21T10:21:44.012725481Z" level=info msg="CreateContainer within sandbox \"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d1662d05b344bb005c1b07e5a46414593d575583eda18791aac89a962942b0b\"" Apr 21 10:21:44.020961 containerd[1457]: time="2026-04-21T10:21:44.019614482Z" level=info msg="StartContainer for \"0d1662d05b344bb005c1b07e5a46414593d575583eda18791aac89a962942b0b\"" Apr 21 10:21:44.052125 systemd[1]: Started cri-containerd-0d1662d05b344bb005c1b07e5a46414593d575583eda18791aac89a962942b0b.scope - libcontainer container 0d1662d05b344bb005c1b07e5a46414593d575583eda18791aac89a962942b0b. 
Apr 21 10:21:44.073358 containerd[1457]: time="2026-04-21T10:21:44.073308555Z" level=info msg="StartContainer for \"0d1662d05b344bb005c1b07e5a46414593d575583eda18791aac89a962942b0b\" returns successfully" Apr 21 10:21:44.118576 kubelet[2508]: I0421 10:21:44.118393 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e20ed5d9-8fb7-41be-977b-71a41354d15e-nginx-config\") pod \"whisker-c59bb69d4-khhgm\" (UID: \"e20ed5d9-8fb7-41be-977b-71a41354d15e\") " pod="calico-system/whisker-c59bb69d4-khhgm" Apr 21 10:21:44.119952 kubelet[2508]: I0421 10:21:44.119921 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e20ed5d9-8fb7-41be-977b-71a41354d15e-whisker-ca-bundle\") pod \"whisker-c59bb69d4-khhgm\" (UID: \"e20ed5d9-8fb7-41be-977b-71a41354d15e\") " pod="calico-system/whisker-c59bb69d4-khhgm" Apr 21 10:21:44.120146 kubelet[2508]: I0421 10:21:44.120008 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktvs\" (UniqueName: \"kubernetes.io/projected/e20ed5d9-8fb7-41be-977b-71a41354d15e-kube-api-access-dktvs\") pod \"whisker-c59bb69d4-khhgm\" (UID: \"e20ed5d9-8fb7-41be-977b-71a41354d15e\") " pod="calico-system/whisker-c59bb69d4-khhgm" Apr 21 10:21:44.120181 kubelet[2508]: I0421 10:21:44.120153 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e20ed5d9-8fb7-41be-977b-71a41354d15e-whisker-backend-key-pair\") pod \"whisker-c59bb69d4-khhgm\" (UID: \"e20ed5d9-8fb7-41be-977b-71a41354d15e\") " pod="calico-system/whisker-c59bb69d4-khhgm" Apr 21 10:21:44.298964 containerd[1457]: time="2026-04-21T10:21:44.297810270Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-c59bb69d4-khhgm,Uid:e20ed5d9-8fb7-41be-977b-71a41354d15e,Namespace:calico-system,Attempt:0,}" Apr 21 10:21:44.607537 systemd-networkd[1383]: calidfd44e78b8c: Link UP Apr 21 10:21:44.608065 systemd-networkd[1383]: calidfd44e78b8c: Gained carrier Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.412 [ERROR][4445] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.422 [INFO][4445] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c59bb69d4--khhgm-eth0 whisker-c59bb69d4- calico-system e20ed5d9-8fb7-41be-977b-71a41354d15e 954 0 2026-04-21 10:21:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c59bb69d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c59bb69d4-khhgm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidfd44e78b8c [] [] }} ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.422 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.485 [INFO][4464] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" 
HandleID="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Workload="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.506 [INFO][4464] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" HandleID="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Workload="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c59bb69d4-khhgm", "timestamp":"2026-04-21 10:21:44.485497571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004df080)} Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.506 [INFO][4464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.507 [INFO][4464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.507 [INFO][4464] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.522 [INFO][4464] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.535 [INFO][4464] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.541 [INFO][4464] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.543 [INFO][4464] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.546 [INFO][4464] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.546 [INFO][4464] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.551 [INFO][4464] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165 Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.574 [INFO][4464] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.595 [INFO][4464] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.595 [INFO][4464] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" host="localhost" Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.595 [INFO][4464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:21:44.645019 containerd[1457]: 2026-04-21 10:21:44.595 [INFO][4464] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" HandleID="k8s-pod-network.6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Workload="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.597 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c59bb69d4--khhgm-eth0", GenerateName:"whisker-c59bb69d4-", Namespace:"calico-system", SelfLink:"", UID:"e20ed5d9-8fb7-41be-977b-71a41354d15e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c59bb69d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c59bb69d4-khhgm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidfd44e78b8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.597 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.597 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd44e78b8c ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.607 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.611 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" 
WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c59bb69d4--khhgm-eth0", GenerateName:"whisker-c59bb69d4-", Namespace:"calico-system", SelfLink:"", UID:"e20ed5d9-8fb7-41be-977b-71a41354d15e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c59bb69d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165", Pod:"whisker-c59bb69d4-khhgm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidfd44e78b8c", MAC:"06:12:5e:f7:71:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:44.645830 containerd[1457]: 2026-04-21 10:21:44.641 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165" Namespace="calico-system" Pod="whisker-c59bb69d4-khhgm" WorkloadEndpoint="localhost-k8s-whisker--c59bb69d4--khhgm-eth0" Apr 21 10:21:44.690611 containerd[1457]: time="2026-04-21T10:21:44.690527073Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:44.690611 containerd[1457]: time="2026-04-21T10:21:44.690584641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:44.691044 containerd[1457]: time="2026-04-21T10:21:44.690597394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:44.691591 containerd[1457]: time="2026-04-21T10:21:44.691545280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:44.709072 systemd[1]: Started cri-containerd-6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165.scope - libcontainer container 6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165. Apr 21 10:21:44.738888 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:44.750953 kernel: calico-node[4368]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:21:44.779319 containerd[1457]: time="2026-04-21T10:21:44.778894386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c59bb69d4-khhgm,Uid:e20ed5d9-8fb7-41be-977b-71a41354d15e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165\"" Apr 21 10:21:44.935941 kubelet[2508]: E0421 10:21:44.935719 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:44.948879 kubelet[2508]: E0421 10:21:44.948811 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 21 10:21:44.962644 kubelet[2508]: I0421 10:21:44.962574 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wgmqg" podStartSLOduration=28.962561405 podStartE2EDuration="28.962561405s" podCreationTimestamp="2026-04-21 10:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:21:44.96203665 +0000 UTC m=+35.782385120" watchObservedRunningTime="2026-04-21 10:21:44.962561405 +0000 UTC m=+35.782909876" Apr 21 10:21:45.014618 systemd-networkd[1383]: calidaf837d9e9e: Gained IPv6LL Apr 21 10:21:45.202929 systemd-networkd[1383]: cali49e47f3b705: Gained IPv6LL Apr 21 10:21:45.355823 systemd-networkd[1383]: cali4fe7b90b52e: Gained IPv6LL Apr 21 10:21:45.359783 kubelet[2508]: I0421 10:21:45.359147 2508 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7fa99074-099e-4ed7-ae7d-e7227f1db188" path="/var/lib/kubelet/pods/7fa99074-099e-4ed7-ae7d-e7227f1db188/volumes" Apr 21 10:21:45.457311 systemd-networkd[1383]: vxlan.calico: Link UP Apr 21 10:21:45.457318 systemd-networkd[1383]: vxlan.calico: Gained carrier Apr 21 10:21:45.459515 systemd-networkd[1383]: cali4f4a927720e: Gained IPv6LL Apr 21 10:21:45.843667 systemd-networkd[1383]: cali7472f81b665: Gained IPv6LL Apr 21 10:21:45.951866 kubelet[2508]: E0421 10:21:45.951785 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:45.952304 kubelet[2508]: E0421 10:21:45.951970 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:46.132774 containerd[1457]: time="2026-04-21T10:21:46.132480495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:46.133482 containerd[1457]: time="2026-04-21T10:21:46.133402669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:21:46.134593 containerd[1457]: time="2026-04-21T10:21:46.134555809Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:46.136839 containerd[1457]: time="2026-04-21T10:21:46.136793322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:46.137363 containerd[1457]: time="2026-04-21T10:21:46.137337485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.579909788s" Apr 21 10:21:46.137400 containerd[1457]: time="2026-04-21T10:21:46.137364589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:21:46.139349 containerd[1457]: time="2026-04-21T10:21:46.139321808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:21:46.143685 containerd[1457]: time="2026-04-21T10:21:46.143641876Z" level=info msg="CreateContainer within sandbox \"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:21:46.165464 containerd[1457]: time="2026-04-21T10:21:46.165409247Z" 
level=info msg="CreateContainer within sandbox \"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9bed08ac94efc70e941dbd5faeffa57ae1716a368b547eee83685ebd70fd982f\"" Apr 21 10:21:46.165969 containerd[1457]: time="2026-04-21T10:21:46.165945396Z" level=info msg="StartContainer for \"9bed08ac94efc70e941dbd5faeffa57ae1716a368b547eee83685ebd70fd982f\"" Apr 21 10:21:46.207123 systemd[1]: Started cri-containerd-9bed08ac94efc70e941dbd5faeffa57ae1716a368b547eee83685ebd70fd982f.scope - libcontainer container 9bed08ac94efc70e941dbd5faeffa57ae1716a368b547eee83685ebd70fd982f. Apr 21 10:21:46.267928 containerd[1457]: time="2026-04-21T10:21:46.267706020Z" level=info msg="StartContainer for \"9bed08ac94efc70e941dbd5faeffa57ae1716a368b547eee83685ebd70fd982f\" returns successfully" Apr 21 10:21:46.290133 systemd-networkd[1383]: calidfd44e78b8c: Gained IPv6LL Apr 21 10:21:46.689604 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:35778.service - OpenSSH per-connection server daemon (10.0.0.1:35778). Apr 21 10:21:46.742299 sshd[4727]: Accepted publickey for core from 10.0.0.1 port 35778 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:21:46.744119 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:46.748268 systemd-logind[1444]: New session 8 of user core. Apr 21 10:21:46.757447 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:21:46.897165 sshd[4727]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:46.900156 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:35778.service: Deactivated successfully. Apr 21 10:21:46.901730 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:21:46.902763 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:21:46.903495 systemd-logind[1444]: Removed session 8. 
Apr 21 10:21:46.959945 kubelet[2508]: E0421 10:21:46.959631 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:21:47.378425 systemd-networkd[1383]: vxlan.calico: Gained IPv6LL Apr 21 10:21:47.964288 kubelet[2508]: I0421 10:21:47.963968 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:21:48.112447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446962782.mount: Deactivated successfully. Apr 21 10:21:48.420883 containerd[1457]: time="2026-04-21T10:21:48.420660753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:48.422780 containerd[1457]: time="2026-04-21T10:21:48.421890444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:21:48.422780 containerd[1457]: time="2026-04-21T10:21:48.422726683Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:48.428001 containerd[1457]: time="2026-04-21T10:21:48.427781362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:21:48.429502 containerd[1457]: time="2026-04-21T10:21:48.428593790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.289211808s" Apr 21 
10:21:48.429502 containerd[1457]: time="2026-04-21T10:21:48.428617336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:21:48.431614 containerd[1457]: time="2026-04-21T10:21:48.431567198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:21:48.450370 containerd[1457]: time="2026-04-21T10:21:48.450164633Z" level=info msg="CreateContainer within sandbox \"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:21:48.479638 containerd[1457]: time="2026-04-21T10:21:48.479573765Z" level=info msg="CreateContainer within sandbox \"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e\"" Apr 21 10:21:48.481658 containerd[1457]: time="2026-04-21T10:21:48.481606546Z" level=info msg="StartContainer for \"3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e\"" Apr 21 10:21:48.542316 systemd[1]: Started cri-containerd-3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e.scope - libcontainer container 3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e. 
Apr 21 10:21:48.607947 containerd[1457]: time="2026-04-21T10:21:48.607769951Z" level=info msg="StartContainer for \"3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e\" returns successfully"
Apr 21 10:21:48.967547 containerd[1457]: time="2026-04-21T10:21:48.967433316Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:48.971088 containerd[1457]: time="2026-04-21T10:21:48.969447653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 21 10:21:48.974587 containerd[1457]: time="2026-04-21T10:21:48.974550889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 542.952997ms"
Apr 21 10:21:48.974787 containerd[1457]: time="2026-04-21T10:21:48.974588685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 21 10:21:48.976690 containerd[1457]: time="2026-04-21T10:21:48.976347411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 21 10:21:48.983489 containerd[1457]: time="2026-04-21T10:21:48.983460928Z" level=info msg="CreateContainer within sandbox \"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 21 10:21:48.995142 kubelet[2508]: I0421 10:21:48.994792 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7999b6f797-z5ch8" podStartSLOduration=19.412634093 podStartE2EDuration="21.994781173s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:43.556955563 +0000 UTC m=+34.377304022" lastFinishedPulling="2026-04-21 10:21:46.139102643 +0000 UTC m=+36.959451102" observedRunningTime="2026-04-21 10:21:46.971407881 +0000 UTC m=+37.791756351" watchObservedRunningTime="2026-04-21 10:21:48.994781173 +0000 UTC m=+39.815129642"
Apr 21 10:21:48.995142 kubelet[2508]: I0421 10:21:48.994894 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-zv4b9" podStartSLOduration=17.354726581 podStartE2EDuration="21.994890632s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:43.79129642 +0000 UTC m=+34.611644883" lastFinishedPulling="2026-04-21 10:21:48.431460476 +0000 UTC m=+39.251808934" observedRunningTime="2026-04-21 10:21:48.994664611 +0000 UTC m=+39.815013083" watchObservedRunningTime="2026-04-21 10:21:48.994890632 +0000 UTC m=+39.815239101"
Apr 21 10:21:49.007773 containerd[1457]: time="2026-04-21T10:21:49.007737855Z" level=info msg="CreateContainer within sandbox \"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0e3711b46f2df1d521fc73da0eae5342cade57fbdbe82fdcbbbc01ace2c5173\""
Apr 21 10:21:49.008592 containerd[1457]: time="2026-04-21T10:21:49.008508502Z" level=info msg="StartContainer for \"e0e3711b46f2df1d521fc73da0eae5342cade57fbdbe82fdcbbbc01ace2c5173\""
Apr 21 10:21:49.057041 systemd[1]: Started cri-containerd-e0e3711b46f2df1d521fc73da0eae5342cade57fbdbe82fdcbbbc01ace2c5173.scope - libcontainer container e0e3711b46f2df1d521fc73da0eae5342cade57fbdbe82fdcbbbc01ace2c5173.
Apr 21 10:21:49.151487 containerd[1457]: time="2026-04-21T10:21:49.151352645Z" level=info msg="StartContainer for \"e0e3711b46f2df1d521fc73da0eae5342cade57fbdbe82fdcbbbc01ace2c5173\" returns successfully"
Apr 21 10:21:50.010086 kubelet[2508]: I0421 10:21:50.010024 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7999b6f797-gkbt6" podStartSLOduration=17.914901572 podStartE2EDuration="23.009990454s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:43.880671959 +0000 UTC m=+34.701020419" lastFinishedPulling="2026-04-21 10:21:48.975760841 +0000 UTC m=+39.796109301" observedRunningTime="2026-04-21 10:21:50.006714368 +0000 UTC m=+40.827062838" watchObservedRunningTime="2026-04-21 10:21:50.009990454 +0000 UTC m=+40.830338924"
Apr 21 10:21:50.023481 systemd[1]: run-containerd-runc-k8s.io-3ce646f56a0946d03f798ce75f6a66a2da6106cfe65e0e89264ade73b0763a9e-runc.SzPn8T.mount: Deactivated successfully.
Apr 21 10:21:50.727676 containerd[1457]: time="2026-04-21T10:21:50.727390015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.731038 containerd[1457]: time="2026-04-21T10:21:50.730988134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 21 10:21:50.738609 containerd[1457]: time="2026-04-21T10:21:50.738205369Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.746464 containerd[1457]: time="2026-04-21T10:21:50.746386150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.748506 containerd[1457]: time="2026-04-21T10:21:50.747195785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.770826213s"
Apr 21 10:21:50.748506 containerd[1457]: time="2026-04-21T10:21:50.747251213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 21 10:21:50.759389 containerd[1457]: time="2026-04-21T10:21:50.759307759Z" level=info msg="CreateContainer within sandbox \"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:21:50.801351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479641722.mount: Deactivated successfully.
Apr 21 10:21:50.814886 containerd[1457]: time="2026-04-21T10:21:50.814657447Z" level=info msg="CreateContainer within sandbox \"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e6934e13ecd07c14d94c36b277ca3820b027e9d0aa4e4b825e3e36c690a96f3e\""
Apr 21 10:21:50.819941 containerd[1457]: time="2026-04-21T10:21:50.817778150Z" level=info msg="StartContainer for \"e6934e13ecd07c14d94c36b277ca3820b027e9d0aa4e4b825e3e36c690a96f3e\""
Apr 21 10:21:50.892126 systemd[1]: Started cri-containerd-e6934e13ecd07c14d94c36b277ca3820b027e9d0aa4e4b825e3e36c690a96f3e.scope - libcontainer container e6934e13ecd07c14d94c36b277ca3820b027e9d0aa4e4b825e3e36c690a96f3e.
Apr 21 10:21:51.008082 containerd[1457]: time="2026-04-21T10:21:51.007628149Z" level=info msg="StartContainer for \"e6934e13ecd07c14d94c36b277ca3820b027e9d0aa4e4b825e3e36c690a96f3e\" returns successfully"
Apr 21 10:21:51.010859 containerd[1457]: time="2026-04-21T10:21:51.010407994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 21 10:21:51.012095 kubelet[2508]: I0421 10:21:51.012078 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:21:51.914677 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:60280.service - OpenSSH per-connection server daemon (10.0.0.1:60280).
Apr 21 10:21:51.973537 sshd[4977]: Accepted publickey for core from 10.0.0.1 port 60280 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:51.975600 sshd[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:51.980069 systemd-logind[1444]: New session 9 of user core.
Apr 21 10:21:51.989064 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:21:52.183277 sshd[4977]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:52.186721 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:60280.service: Deactivated successfully.
Apr 21 10:21:52.188532 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:21:52.189338 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:21:52.190214 systemd-logind[1444]: Removed session 9.
Apr 21 10:21:56.878013 kubelet[2508]: E0421 10:21:56.876284 2508 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.813s"
Apr 21 10:21:56.902116 containerd[1457]: time="2026-04-21T10:21:56.902040770Z" level=info msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\""
Apr 21 10:21:56.919395 containerd[1457]: time="2026-04-21T10:21:56.918012065Z" level=info msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\""
Apr 21 10:21:57.216868 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:60282.service - OpenSSH per-connection server daemon (10.0.0.1:60282).
Apr 21 10:21:57.317296 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 60282 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:21:57.339251 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:57.364460 systemd-logind[1444]: New session 10 of user core.
Apr 21 10:21:57.371084 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.222 [INFO][5016] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.222 [INFO][5016] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" iface="eth0" netns="/var/run/netns/cni-f17761b2-42c9-1371-4faf-23f25f63ee49"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.222 [INFO][5016] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" iface="eth0" netns="/var/run/netns/cni-f17761b2-42c9-1371-4faf-23f25f63ee49"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.223 [INFO][5016] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" iface="eth0" netns="/var/run/netns/cni-f17761b2-42c9-1371-4faf-23f25f63ee49"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.223 [INFO][5016] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.223 [INFO][5016] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.351 [INFO][5052] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.353 [INFO][5052] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.353 [INFO][5052] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.377 [WARNING][5052] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.377 [INFO][5052] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.380 [INFO][5052] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:21:57.395407 containerd[1457]: 2026-04-21 10:21:57.384 [INFO][5016] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d"
Apr 21 10:21:57.396278 containerd[1457]: time="2026-04-21T10:21:57.395659178Z" level=info msg="TearDown network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" successfully"
Apr 21 10:21:57.396278 containerd[1457]: time="2026-04-21T10:21:57.395684724Z" level=info msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" returns successfully"
Apr 21 10:21:57.398072 systemd[1]: run-netns-cni\x2df17761b2\x2d42c9\x2d1371\x2d4faf\x2d23f25f63ee49.mount: Deactivated successfully.
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.219 [INFO][5015] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.220 [INFO][5015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" iface="eth0" netns="/var/run/netns/cni-b2804636-d1d8-4bce-4534-7f9368afeca3"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.221 [INFO][5015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" iface="eth0" netns="/var/run/netns/cni-b2804636-d1d8-4bce-4534-7f9368afeca3"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.221 [INFO][5015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" iface="eth0" netns="/var/run/netns/cni-b2804636-d1d8-4bce-4534-7f9368afeca3"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.221 [INFO][5015] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.221 [INFO][5015] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.370 [INFO][5050] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.376 [INFO][5050] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.380 [INFO][5050] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.387 [WARNING][5050] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.387 [INFO][5050] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.393 [INFO][5050] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:21:57.415019 containerd[1457]: 2026-04-21 10:21:57.398 [INFO][5015] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7"
Apr 21 10:21:57.415019 containerd[1457]: time="2026-04-21T10:21:57.414398941Z" level=info msg="TearDown network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" successfully"
Apr 21 10:21:57.415019 containerd[1457]: time="2026-04-21T10:21:57.414425624Z" level=info msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" returns successfully"
Apr 21 10:21:57.416831 systemd[1]: run-netns-cni\x2db2804636\x2dd1d8\x2d4bce\x2d4534\x2d7f9368afeca3.mount: Deactivated successfully.
Apr 21 10:21:57.418170 containerd[1457]: time="2026-04-21T10:21:57.418135791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xcfq,Uid:8f7d6a7b-34e6-4667-9ed5-9310508d9afb,Namespace:calico-system,Attempt:1,}"
Apr 21 10:21:57.421454 containerd[1457]: time="2026-04-21T10:21:57.421400485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c5b854d-hstpz,Uid:e4b23b05-a52a-44ea-b704-ed8f7e3ac456,Namespace:calico-system,Attempt:1,}"
Apr 21 10:21:57.723174 sshd[5048]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:57.729580 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:60282.service: Deactivated successfully.
Apr 21 10:21:57.732024 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:21:57.733890 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:21:57.735022 systemd-logind[1444]: Removed session 10.
Apr 21 10:21:57.860388 systemd-networkd[1383]: cali56e0508a1ad: Link UP
Apr 21 10:21:57.861160 systemd-networkd[1383]: cali56e0508a1ad: Gained carrier
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.612 [INFO][5079] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0 calico-kube-controllers-657c5b854d- calico-system e4b23b05-a52a-44ea-b704-ed8f7e3ac456 1097 0 2026-04-21 10:21:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:657c5b854d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-657c5b854d-hstpz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56e0508a1ad [] [] }} ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.612 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.704 [INFO][5106] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" HandleID="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.716 [INFO][5106] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" HandleID="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-657c5b854d-hstpz", "timestamp":"2026-04-21 10:21:57.704526177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000150580)}
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.716 [INFO][5106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.716 [INFO][5106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.716 [INFO][5106] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.729 [INFO][5106] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.738 [INFO][5106] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.744 [INFO][5106] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.746 [INFO][5106] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.750 [INFO][5106] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.750 [INFO][5106] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.768 [INFO][5106] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.792 [INFO][5106] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.834 [INFO][5106] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.836 [INFO][5106] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" host="localhost"
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.837 [INFO][5106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:21:57.894414 containerd[1457]: 2026-04-21 10:21:57.837 [INFO][5106] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" HandleID="k8s-pod-network.89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.844 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0", GenerateName:"calico-kube-controllers-657c5b854d-", Namespace:"calico-system", SelfLink:"", UID:"e4b23b05-a52a-44ea-b704-ed8f7e3ac456", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c5b854d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-657c5b854d-hstpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e0508a1ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.854 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.854 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56e0508a1ad ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.863 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.864 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0", GenerateName:"calico-kube-controllers-657c5b854d-", Namespace:"calico-system", SelfLink:"", UID:"e4b23b05-a52a-44ea-b704-ed8f7e3ac456", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c5b854d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2", Pod:"calico-kube-controllers-657c5b854d-hstpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e0508a1ad", MAC:"ca:b6:d6:b7:21:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:21:57.945116 containerd[1457]: 2026-04-21 10:21:57.888 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2" Namespace="calico-system" Pod="calico-kube-controllers-657c5b854d-hstpz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0"
Apr 21 10:21:57.962318 containerd[1457]: time="2026-04-21T10:21:57.961714867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:57.965859 containerd[1457]: time="2026-04-21T10:21:57.965546149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 21 10:21:57.966629 containerd[1457]: time="2026-04-21T10:21:57.966517037Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:57.987184 containerd[1457]: time="2026-04-21T10:21:57.985297064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:57.992945 containerd[1457]: time="2026-04-21T10:21:57.991441125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 6.980995297s"
Apr 21 10:21:57.992945 containerd[1457]: time="2026-04-21T10:21:57.991493394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 21 10:21:58.013994 containerd[1457]: time="2026-04-21T10:21:58.013325900Z" level=info msg="CreateContainer within sandbox \"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 21 10:21:58.052268 containerd[1457]: time="2026-04-21T10:21:58.051857819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:21:58.052268 containerd[1457]: time="2026-04-21T10:21:58.051996510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:21:58.052268 containerd[1457]: time="2026-04-21T10:21:58.052019229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:58.052268 containerd[1457]: time="2026-04-21T10:21:58.052142929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:21:58.071762 containerd[1457]: time="2026-04-21T10:21:58.071608498Z" level=info msg="CreateContainer within sandbox \"6a8ebac8466c310edcc58015989d3d0f1ba7544f10b83b13a560471e0fe87165\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"991cabb26ed8a85c3005a8e87fc99a7a5fea3671364eceb18f7b253b08ac58a8\""
Apr 21 10:21:58.077895 containerd[1457]: time="2026-04-21T10:21:58.077537636Z" level=info msg="StartContainer for \"991cabb26ed8a85c3005a8e87fc99a7a5fea3671364eceb18f7b253b08ac58a8\""
Apr 21 10:21:58.094360 systemd-networkd[1383]: cali92c651caf71: Link UP
Apr 21 10:21:58.095511 systemd-networkd[1383]: cali92c651caf71: Gained carrier
Apr 21 10:21:58.125465 systemd[1]: Started cri-containerd-89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2.scope - libcontainer container 89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2.
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.611 [INFO][5075] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5xcfq-eth0 csi-node-driver- calico-system 8f7d6a7b-34e6-4667-9ed5-9310508d9afb 1096 0 2026-04-21 10:21:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5xcfq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92c651caf71 [] [] }} ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.612 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.713 [INFO][5112] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" HandleID="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.736 [INFO][5112] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" HandleID="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5xcfq", "timestamp":"2026-04-21 10:21:57.713253998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000218420)}
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.736 [INFO][5112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.836 [INFO][5112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.837 [INFO][5112] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.854 [INFO][5112] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.873 [INFO][5112] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.967 [INFO][5112] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.995 [INFO][5112] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.998 [INFO][5112] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:57.998 [INFO][5112] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.012 [INFO][5112] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.048 [INFO][5112] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.084 [INFO][5112] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.085 [INFO][5112] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" host="localhost"
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.085 [INFO][5112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:21:58.137262 containerd[1457]: 2026-04-21 10:21:58.085 [INFO][5112] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" HandleID="k8s-pod-network.0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0"
Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.089 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5xcfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f7d6a7b-34e6-4667-9ed5-9310508d9afb", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s",
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5xcfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c651caf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.090 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.090 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92c651caf71 ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.096 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.097 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" 
Namespace="calico-system" Pod="csi-node-driver-5xcfq" WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5xcfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f7d6a7b-34e6-4667-9ed5-9310508d9afb", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd", Pod:"csi-node-driver-5xcfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c651caf71", MAC:"22:49:0c:c1:dd:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:21:58.137815 containerd[1457]: 2026-04-21 10:21:58.133 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd" Namespace="calico-system" Pod="csi-node-driver-5xcfq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:21:58.156844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072459862.mount: Deactivated successfully. Apr 21 10:21:58.169610 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:58.187040 systemd[1]: Started cri-containerd-991cabb26ed8a85c3005a8e87fc99a7a5fea3671364eceb18f7b253b08ac58a8.scope - libcontainer container 991cabb26ed8a85c3005a8e87fc99a7a5fea3671364eceb18f7b253b08ac58a8. Apr 21 10:21:58.222269 containerd[1457]: time="2026-04-21T10:21:58.220192879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:21:58.222269 containerd[1457]: time="2026-04-21T10:21:58.220255988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:21:58.222269 containerd[1457]: time="2026-04-21T10:21:58.220264804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:58.222269 containerd[1457]: time="2026-04-21T10:21:58.220380288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:21:58.307072 systemd[1]: Started cri-containerd-0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd.scope - libcontainer container 0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd. 
Apr 21 10:21:58.333377 containerd[1457]: time="2026-04-21T10:21:58.333235864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-657c5b854d-hstpz,Uid:e4b23b05-a52a-44ea-b704-ed8f7e3ac456,Namespace:calico-system,Attempt:1,} returns sandbox id \"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2\"" Apr 21 10:21:58.375764 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:21:58.394241 containerd[1457]: time="2026-04-21T10:21:58.389378052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:21:58.450024 containerd[1457]: time="2026-04-21T10:21:58.449828055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5xcfq,Uid:8f7d6a7b-34e6-4667-9ed5-9310508d9afb,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd\"" Apr 21 10:21:58.450024 containerd[1457]: time="2026-04-21T10:21:58.449960865Z" level=info msg="StartContainer for \"991cabb26ed8a85c3005a8e87fc99a7a5fea3671364eceb18f7b253b08ac58a8\" returns successfully" Apr 21 10:21:58.898986 systemd-networkd[1383]: cali56e0508a1ad: Gained IPv6LL Apr 21 10:21:58.925776 kubelet[2508]: I0421 10:21:58.925053 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-c59bb69d4-khhgm" podStartSLOduration=2.71232103 podStartE2EDuration="15.923735094s" podCreationTimestamp="2026-04-21 10:21:43 +0000 UTC" firstStartedPulling="2026-04-21 10:21:44.784755411 +0000 UTC m=+35.605103871" lastFinishedPulling="2026-04-21 10:21:57.996169476 +0000 UTC m=+48.816517935" observedRunningTime="2026-04-21 10:21:58.920846112 +0000 UTC m=+49.741194575" watchObservedRunningTime="2026-04-21 10:21:58.923735094 +0000 UTC m=+49.744083571" Apr 21 10:21:59.991202 systemd-networkd[1383]: cali92c651caf71: Gained IPv6LL Apr 21 10:22:02.744412 systemd[1]: Started 
sshd@10-10.0.0.60:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). Apr 21 10:22:02.779624 containerd[1457]: time="2026-04-21T10:22:02.779438714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:02.781361 containerd[1457]: time="2026-04-21T10:22:02.780758901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:22:02.781963 containerd[1457]: time="2026-04-21T10:22:02.781825345Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:02.793364 containerd[1457]: time="2026-04-21T10:22:02.792874591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:02.855644 containerd[1457]: time="2026-04-21T10:22:02.855421837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.461170955s" Apr 21 10:22:02.855644 containerd[1457]: time="2026-04-21T10:22:02.855486899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:22:02.863573 containerd[1457]: time="2026-04-21T10:22:02.863097138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:22:02.951539 sshd[5319]: 
Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:22:02.954415 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:22:02.957978 containerd[1457]: time="2026-04-21T10:22:02.957882448Z" level=info msg="CreateContainer within sandbox \"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:22:02.996768 systemd-logind[1444]: New session 11 of user core. Apr 21 10:22:03.008397 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:22:03.134293 containerd[1457]: time="2026-04-21T10:22:03.130343696Z" level=info msg="CreateContainer within sandbox \"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"93b8ffe5b854c03e4632eb2abd566e24990a512de41469c914f739ac8f7d3f32\"" Apr 21 10:22:03.136300 containerd[1457]: time="2026-04-21T10:22:03.135303942Z" level=info msg="StartContainer for \"93b8ffe5b854c03e4632eb2abd566e24990a512de41469c914f739ac8f7d3f32\"" Apr 21 10:22:03.224119 systemd[1]: Started cri-containerd-93b8ffe5b854c03e4632eb2abd566e24990a512de41469c914f739ac8f7d3f32.scope - libcontainer container 93b8ffe5b854c03e4632eb2abd566e24990a512de41469c914f739ac8f7d3f32. Apr 21 10:22:03.500421 containerd[1457]: time="2026-04-21T10:22:03.500125682Z" level=info msg="StartContainer for \"93b8ffe5b854c03e4632eb2abd566e24990a512de41469c914f739ac8f7d3f32\" returns successfully" Apr 21 10:22:03.726983 sshd[5319]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:03.742891 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:33122.service: Deactivated successfully. Apr 21 10:22:03.750939 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:22:03.753665 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. 
Apr 21 10:22:03.778625 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:33128.service - OpenSSH per-connection server daemon (10.0.0.1:33128). Apr 21 10:22:03.780705 systemd-logind[1444]: Removed session 11. Apr 21 10:22:03.955611 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 33128 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:22:03.955264 sshd[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:22:03.990954 systemd-logind[1444]: New session 12 of user core. Apr 21 10:22:03.999367 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:22:04.373793 sshd[5390]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:04.481054 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:33128.service: Deactivated successfully. Apr 21 10:22:04.508484 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:22:04.520897 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:22:04.543962 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:33136.service - OpenSSH per-connection server daemon (10.0.0.1:33136). Apr 21 10:22:04.553697 systemd-logind[1444]: Removed session 12. Apr 21 10:22:04.662293 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 33136 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:22:04.665428 sshd[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:22:04.717613 systemd-logind[1444]: New session 13 of user core. Apr 21 10:22:04.734961 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:22:05.334893 sshd[5406]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:05.369688 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:33136.service: Deactivated successfully. Apr 21 10:22:05.394981 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 21 10:22:05.454958 kubelet[2508]: I0421 10:22:05.454825 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-657c5b854d-hstpz" podStartSLOduration=33.986018239 podStartE2EDuration="38.454783051s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:58.388860045 +0000 UTC m=+49.209208504" lastFinishedPulling="2026-04-21 10:22:02.857624857 +0000 UTC m=+53.677973316" observedRunningTime="2026-04-21 10:22:04.067484257 +0000 UTC m=+54.887832716" watchObservedRunningTime="2026-04-21 10:22:05.454783051 +0000 UTC m=+56.275131518" Apr 21 10:22:05.460039 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:22:05.476645 systemd-logind[1444]: Removed session 13. Apr 21 10:22:05.545717 containerd[1457]: time="2026-04-21T10:22:05.545632611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:05.552364 containerd[1457]: time="2026-04-21T10:22:05.550357927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:22:05.565847 containerd[1457]: time="2026-04-21T10:22:05.565538518Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:05.586662 containerd[1457]: time="2026-04-21T10:22:05.585132109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:05.592017 containerd[1457]: time="2026-04-21T10:22:05.591963410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.728812062s" Apr 21 10:22:05.592110 containerd[1457]: time="2026-04-21T10:22:05.592002814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:22:05.628938 containerd[1457]: time="2026-04-21T10:22:05.628695962Z" level=info msg="CreateContainer within sandbox \"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:22:05.738602 containerd[1457]: time="2026-04-21T10:22:05.738519285Z" level=info msg="CreateContainer within sandbox \"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0ea1dc34e368de33ef263367b623ba833cbd640e01b477611e2351e59133804e\"" Apr 21 10:22:05.741323 containerd[1457]: time="2026-04-21T10:22:05.741125105Z" level=info msg="StartContainer for \"0ea1dc34e368de33ef263367b623ba833cbd640e01b477611e2351e59133804e\"" Apr 21 10:22:05.866686 systemd[1]: Started cri-containerd-0ea1dc34e368de33ef263367b623ba833cbd640e01b477611e2351e59133804e.scope - libcontainer container 0ea1dc34e368de33ef263367b623ba833cbd640e01b477611e2351e59133804e. 
Apr 21 10:22:05.988333 containerd[1457]: time="2026-04-21T10:22:05.988272506Z" level=info msg="StartContainer for \"0ea1dc34e368de33ef263367b623ba833cbd640e01b477611e2351e59133804e\" returns successfully" Apr 21 10:22:05.990084 containerd[1457]: time="2026-04-21T10:22:05.990037336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:22:08.463526 containerd[1457]: time="2026-04-21T10:22:08.463301145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:08.465965 containerd[1457]: time="2026-04-21T10:22:08.465813789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:22:08.467971 containerd[1457]: time="2026-04-21T10:22:08.466838621Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:08.474102 containerd[1457]: time="2026-04-21T10:22:08.474074307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:08.475522 containerd[1457]: time="2026-04-21T10:22:08.475484404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.48509168s" Apr 21 10:22:08.475602 containerd[1457]: time="2026-04-21T10:22:08.475531010Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:22:08.518131 containerd[1457]: time="2026-04-21T10:22:08.517971480Z" level=info msg="CreateContainer within sandbox \"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:22:08.544524 containerd[1457]: time="2026-04-21T10:22:08.544431639Z" level=info msg="CreateContainer within sandbox \"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e3752ec560be4186dfeb8d2119d92e8de2f0eaa80e7802aea6e634a31a36481\"" Apr 21 10:22:08.546292 containerd[1457]: time="2026-04-21T10:22:08.546254379Z" level=info msg="StartContainer for \"8e3752ec560be4186dfeb8d2119d92e8de2f0eaa80e7802aea6e634a31a36481\"" Apr 21 10:22:08.679011 systemd[1]: Started cri-containerd-8e3752ec560be4186dfeb8d2119d92e8de2f0eaa80e7802aea6e634a31a36481.scope - libcontainer container 8e3752ec560be4186dfeb8d2119d92e8de2f0eaa80e7802aea6e634a31a36481. Apr 21 10:22:08.977143 containerd[1457]: time="2026-04-21T10:22:08.977034998Z" level=info msg="StartContainer for \"8e3752ec560be4186dfeb8d2119d92e8de2f0eaa80e7802aea6e634a31a36481\" returns successfully" Apr 21 10:22:09.373255 containerd[1457]: time="2026-04-21T10:22:09.373135761Z" level=info msg="StopPodSandbox for \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\"" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.651 [WARNING][5552] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4jtjk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241", Pod:"coredns-7d764666f9-4jtjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fe7b90b52e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.653 [INFO][5552] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.653 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" iface="eth0" netns="" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.653 [INFO][5552] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.653 [INFO][5552] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.798 [INFO][5561] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.799 [INFO][5561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.799 [INFO][5561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.866 [WARNING][5561] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.866 [INFO][5561] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.956 [INFO][5561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:09.981312 containerd[1457]: 2026-04-21 10:22:09.974 [INFO][5552] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:09.981312 containerd[1457]: time="2026-04-21T10:22:09.981113208Z" level=info msg="TearDown network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" successfully" Apr 21 10:22:09.981312 containerd[1457]: time="2026-04-21T10:22:09.981133719Z" level=info msg="StopPodSandbox for \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" returns successfully" Apr 21 10:22:10.131601 containerd[1457]: time="2026-04-21T10:22:10.131138028Z" level=info msg="RemovePodSandbox for \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\"" Apr 21 10:22:10.136633 containerd[1457]: time="2026-04-21T10:22:10.136561854Z" level=info msg="Forcibly stopping sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\"" Apr 21 10:22:10.142858 kubelet[2508]: I0421 10:22:10.142271 2508 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:22:10.143269 
kubelet[2508]: I0421 10:22:10.143136 2508 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:22:10.435370 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:43966.service - OpenSSH per-connection server daemon (10.0.0.1:43966). Apr 21 10:22:10.579507 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 43966 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:22:10.656416 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:22:10.754737 kubelet[2508]: I0421 10:22:10.752687 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:10.754688 systemd-logind[1444]: New session 14 of user core. Apr 21 10:22:10.762356 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.416 [WARNING][5580] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--4jtjk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6645c0ad-7034-4d39-a7d9-4a1d8fcc7de5", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46ac7c417dabe0380be8dff95a96545ca8e2899656e96e70275e52d986779241", Pod:"coredns-7d764666f9-4jtjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fe7b90b52e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.420 [INFO][5580] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.431 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" iface="eth0" netns="" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.431 [INFO][5580] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.431 [INFO][5580] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.563 [INFO][5590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.564 [INFO][5590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.564 [INFO][5590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.732 [WARNING][5590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.736 [INFO][5590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" HandleID="k8s-pod-network.6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Workload="localhost-k8s-coredns--7d764666f9--4jtjk-eth0" Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.789 [INFO][5590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:10.797963 containerd[1457]: 2026-04-21 10:22:10.794 [INFO][5580] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d" Apr 21 10:22:10.797963 containerd[1457]: time="2026-04-21T10:22:10.797209571Z" level=info msg="TearDown network for sandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" successfully" Apr 21 10:22:10.864118 containerd[1457]: time="2026-04-21T10:22:10.864004356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:10.865233 containerd[1457]: time="2026-04-21T10:22:10.865203220Z" level=info msg="RemovePodSandbox \"6db9760c2f895e0a14c5625e84297729e01bd3ef2ac6b9a8ce968bb1e4703d7d\" returns successfully" Apr 21 10:22:10.875405 containerd[1457]: time="2026-04-21T10:22:10.875363465Z" level=info msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\"" Apr 21 10:22:11.017726 kubelet[2508]: I0421 10:22:11.017396 2508 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-5xcfq" podStartSLOduration=33.998197175 podStartE2EDuration="44.017379269s" podCreationTimestamp="2026-04-21 10:21:27 +0000 UTC" firstStartedPulling="2026-04-21 10:21:58.457669646 +0000 UTC m=+49.278018106" lastFinishedPulling="2026-04-21 10:22:08.476851742 +0000 UTC m=+59.297200200" observedRunningTime="2026-04-21 10:22:09.247611938 +0000 UTC m=+60.067960397" watchObservedRunningTime="2026-04-21 10:22:11.017379269 +0000 UTC m=+61.837727742" Apr 21 10:22:11.586477 sshd[5588]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:11.590267 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:22:11.591430 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:43966.service: Deactivated successfully. Apr 21 10:22:11.593660 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:22:11.595199 systemd-logind[1444]: Removed session 14. Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.359 [WARNING][5617] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0", GenerateName:"calico-kube-controllers-657c5b854d-", Namespace:"calico-system", SelfLink:"", UID:"e4b23b05-a52a-44ea-b704-ed8f7e3ac456", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c5b854d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2", Pod:"calico-kube-controllers-657c5b854d-hstpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e0508a1ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.361 [INFO][5617] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.361 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" iface="eth0" netns="" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.361 [INFO][5617] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.361 [INFO][5617] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.582 [INFO][5628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.582 [INFO][5628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.582 [INFO][5628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.591 [WARNING][5628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.591 [INFO][5628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.593 [INFO][5628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:11.598438 containerd[1457]: 2026-04-21 10:22:11.595 [INFO][5617] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:11.598438 containerd[1457]: time="2026-04-21T10:22:11.598286306Z" level=info msg="TearDown network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" successfully" Apr 21 10:22:11.598438 containerd[1457]: time="2026-04-21T10:22:11.598316327Z" level=info msg="StopPodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" returns successfully" Apr 21 10:22:11.600050 containerd[1457]: time="2026-04-21T10:22:11.599716072Z" level=info msg="RemovePodSandbox for \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\"" Apr 21 10:22:11.600050 containerd[1457]: time="2026-04-21T10:22:11.599744168Z" level=info msg="Forcibly stopping sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\"" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:11.954 [WARNING][5648] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0", GenerateName:"calico-kube-controllers-657c5b854d-", Namespace:"calico-system", SelfLink:"", UID:"e4b23b05-a52a-44ea-b704-ed8f7e3ac456", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"657c5b854d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89cf60ded6888b8f10a231fcc2d0c343f14f4353af7ffe7a434a374b5ef75fd2", Pod:"calico-kube-controllers-657c5b854d-hstpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e0508a1ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:11.954 [INFO][5648] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:11.954 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" iface="eth0" netns="" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:11.954 [INFO][5648] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:11.954 [INFO][5648] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.033 [INFO][5657] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.033 [INFO][5657] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.033 [INFO][5657] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.071 [WARNING][5657] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.073 [INFO][5657] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" HandleID="k8s-pod-network.d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Workload="localhost-k8s-calico--kube--controllers--657c5b854d--hstpz-eth0" Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.108 [INFO][5657] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:12.120250 containerd[1457]: 2026-04-21 10:22:12.114 [INFO][5648] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7" Apr 21 10:22:12.124560 containerd[1457]: time="2026-04-21T10:22:12.120731202Z" level=info msg="TearDown network for sandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" successfully" Apr 21 10:22:12.145993 containerd[1457]: time="2026-04-21T10:22:12.145426773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:12.145993 containerd[1457]: time="2026-04-21T10:22:12.145712634Z" level=info msg="RemovePodSandbox \"d249da826e9903a00ab5ef91a9c1c3560ce2118a5aa770efca55723ab0c793d7\" returns successfully" Apr 21 10:22:12.156721 containerd[1457]: time="2026-04-21T10:22:12.156604165Z" level=info msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\"" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.385 [WARNING][5674] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5xcfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f7d6a7b-34e6-4667-9ed5-9310508d9afb", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd", Pod:"csi-node-driver-5xcfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c651caf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.386 [INFO][5674] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.386 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" iface="eth0" netns="" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.386 [INFO][5674] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.386 [INFO][5674] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.448 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.449 [INFO][5682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.449 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.485 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.485 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.488 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:12.492654 containerd[1457]: 2026-04-21 10:22:12.490 [INFO][5674] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.494011 containerd[1457]: time="2026-04-21T10:22:12.492828836Z" level=info msg="TearDown network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" successfully" Apr 21 10:22:12.494011 containerd[1457]: time="2026-04-21T10:22:12.492893209Z" level=info msg="StopPodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" returns successfully" Apr 21 10:22:12.499890 containerd[1457]: time="2026-04-21T10:22:12.499691145Z" level=info msg="RemovePodSandbox for \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\"" Apr 21 10:22:12.499890 containerd[1457]: time="2026-04-21T10:22:12.499832707Z" level=info msg="Forcibly stopping sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\"" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.641 [WARNING][5699] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5xcfq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f7d6a7b-34e6-4667-9ed5-9310508d9afb", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b4217ea04e2ff861cea418dc0a46948a64280b21b70213f3989dd5d74a682fd", Pod:"csi-node-driver-5xcfq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92c651caf71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.642 [INFO][5699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.642 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" iface="eth0" netns="" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.642 [INFO][5699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.642 [INFO][5699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.715 [INFO][5708] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.721 [INFO][5708] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.722 [INFO][5708] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.775 [WARNING][5708] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.775 [INFO][5708] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" HandleID="k8s-pod-network.3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Workload="localhost-k8s-csi--node--driver--5xcfq-eth0" Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.789 [INFO][5708] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:12.802671 containerd[1457]: 2026-04-21 10:22:12.799 [INFO][5699] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d" Apr 21 10:22:12.804270 containerd[1457]: time="2026-04-21T10:22:12.802687872Z" level=info msg="TearDown network for sandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" successfully" Apr 21 10:22:12.816814 containerd[1457]: time="2026-04-21T10:22:12.816704145Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:12.817310 containerd[1457]: time="2026-04-21T10:22:12.817069864Z" level=info msg="RemovePodSandbox \"3fe1594c9ce9e6ccbf54079a6310b5f41682e6652f0ecf924f4ff83547cd565d\" returns successfully" Apr 21 10:22:12.819112 containerd[1457]: time="2026-04-21T10:22:12.819086285Z" level=info msg="StopPodSandbox for \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\"" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.091 [WARNING][5725] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" WorkloadEndpoint="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.092 [INFO][5725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.092 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" iface="eth0" netns="" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.093 [INFO][5725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.093 [INFO][5725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.173 [INFO][5734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.187 [INFO][5734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.196 [INFO][5734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.287 [WARNING][5734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.289 [INFO][5734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.363 [INFO][5734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:13.371040 containerd[1457]: 2026-04-21 10:22:13.367 [INFO][5725] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.372770 containerd[1457]: time="2026-04-21T10:22:13.371579237Z" level=info msg="TearDown network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" successfully" Apr 21 10:22:13.372770 containerd[1457]: time="2026-04-21T10:22:13.371736610Z" level=info msg="StopPodSandbox for \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" returns successfully" Apr 21 10:22:13.374115 containerd[1457]: time="2026-04-21T10:22:13.374055490Z" level=info msg="RemovePodSandbox for \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\"" Apr 21 10:22:13.374221 containerd[1457]: time="2026-04-21T10:22:13.374134456Z" level=info msg="Forcibly stopping sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\"" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.614 [WARNING][5753] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" WorkloadEndpoint="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.615 [INFO][5753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.615 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" iface="eth0" netns="" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.615 [INFO][5753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.615 [INFO][5753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.736 [INFO][5763] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.736 [INFO][5763] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.736 [INFO][5763] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.772 [WARNING][5763] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.773 [INFO][5763] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" HandleID="k8s-pod-network.870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Workload="localhost-k8s-whisker--7f78d88875--ckm7h-eth0" Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.826 [INFO][5763] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:13.843281 containerd[1457]: 2026-04-21 10:22:13.836 [INFO][5753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44" Apr 21 10:22:13.843281 containerd[1457]: time="2026-04-21T10:22:13.840514846Z" level=info msg="TearDown network for sandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" successfully" Apr 21 10:22:14.040558 containerd[1457]: time="2026-04-21T10:22:14.040413218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:14.042312 containerd[1457]: time="2026-04-21T10:22:14.041706976Z" level=info msg="RemovePodSandbox \"870edc23b7532d9bb72f333455d2b684f8e81a256e93994125373591d5d3ca44\" returns successfully" Apr 21 10:22:14.059209 containerd[1457]: time="2026-04-21T10:22:14.056963657Z" level=info msg="StopPodSandbox for \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\"" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.352 [WARNING][5781] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fd00ac23-0fb5-4d7f-956d-e123593d4ebc", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530", Pod:"goldmane-9f7667bb8-zv4b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidaf837d9e9e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.353 [INFO][5781] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.353 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" iface="eth0" netns="" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.353 [INFO][5781] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.353 [INFO][5781] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.746 [INFO][5790] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.747 [INFO][5790] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.747 [INFO][5790] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.814 [WARNING][5790] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.815 [INFO][5790] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.843 [INFO][5790] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:14.860235 containerd[1457]: 2026-04-21 10:22:14.852 [INFO][5781] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:14.939532 containerd[1457]: time="2026-04-21T10:22:14.864158613Z" level=info msg="TearDown network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" successfully" Apr 21 10:22:14.939532 containerd[1457]: time="2026-04-21T10:22:14.864476974Z" level=info msg="StopPodSandbox for \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" returns successfully" Apr 21 10:22:14.962095 containerd[1457]: time="2026-04-21T10:22:14.961679239Z" level=info msg="RemovePodSandbox for \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\"" Apr 21 10:22:14.962095 containerd[1457]: time="2026-04-21T10:22:14.962100194Z" level=info msg="Forcibly stopping sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\"" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.207 [WARNING][5817] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fd00ac23-0fb5-4d7f-956d-e123593d4ebc", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"652a1875a966038447c81458fdb0d4825300edcd38704117ac343d1de297b530", Pod:"goldmane-9f7667bb8-zv4b9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidaf837d9e9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.208 [INFO][5817] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.208 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" iface="eth0" netns="" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.208 [INFO][5817] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.208 [INFO][5817] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.246 [INFO][5837] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.247 [INFO][5837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.247 [INFO][5837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.350 [WARNING][5837] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.350 [INFO][5837] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" HandleID="k8s-pod-network.8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Workload="localhost-k8s-goldmane--9f7667bb8--zv4b9-eth0" Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.382 [INFO][5837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:15.389752 containerd[1457]: 2026-04-21 10:22:15.386 [INFO][5817] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a" Apr 21 10:22:15.391684 containerd[1457]: time="2026-04-21T10:22:15.390305747Z" level=info msg="TearDown network for sandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" successfully" Apr 21 10:22:15.396991 containerd[1457]: time="2026-04-21T10:22:15.396938059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:15.397091 containerd[1457]: time="2026-04-21T10:22:15.397002395Z" level=info msg="RemovePodSandbox \"8fa287f303d1d7303401cea156b6806520c2898c2ff40e5f0d9809fe0b288b2a\" returns successfully" Apr 21 10:22:15.397404 containerd[1457]: time="2026-04-21T10:22:15.397377188Z" level=info msg="StopPodSandbox for \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\"" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:15.971 [WARNING][5855] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"c70b1305-257f-44b6-ab9b-a0c251378e0f", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3", Pod:"calico-apiserver-7999b6f797-z5ch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali49e47f3b705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:15.972 [INFO][5855] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:15.972 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" iface="eth0" netns="" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:15.972 [INFO][5855] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:15.972 [INFO][5855] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.074 [INFO][5864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.075 [INFO][5864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.075 [INFO][5864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.083 [WARNING][5864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.083 [INFO][5864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.108 [INFO][5864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:16.117190 containerd[1457]: 2026-04-21 10:22:16.111 [INFO][5855] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.117190 containerd[1457]: time="2026-04-21T10:22:16.115298218Z" level=info msg="TearDown network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" successfully" Apr 21 10:22:16.117190 containerd[1457]: time="2026-04-21T10:22:16.115498724Z" level=info msg="StopPodSandbox for \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" returns successfully" Apr 21 10:22:16.120384 containerd[1457]: time="2026-04-21T10:22:16.119663764Z" level=info msg="RemovePodSandbox for \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\"" Apr 21 10:22:16.120384 containerd[1457]: time="2026-04-21T10:22:16.119689887Z" level=info msg="Forcibly stopping sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\"" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.234 [WARNING][5883] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"c70b1305-257f-44b6-ab9b-a0c251378e0f", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59c63e470cc45fe84147b86e5e9fba1cc961660685e2e2a2abe95edb6968c6c3", Pod:"calico-apiserver-7999b6f797-z5ch8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali49e47f3b705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.235 [INFO][5883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.235 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" iface="eth0" netns="" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.235 [INFO][5883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.235 [INFO][5883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.264 [INFO][5892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.265 [INFO][5892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.265 [INFO][5892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.274 [WARNING][5892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.274 [INFO][5892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" HandleID="k8s-pod-network.ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Workload="localhost-k8s-calico--apiserver--7999b6f797--z5ch8-eth0" Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.294 [INFO][5892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:16.304018 containerd[1457]: 2026-04-21 10:22:16.301 [INFO][5883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4" Apr 21 10:22:16.308949 containerd[1457]: time="2026-04-21T10:22:16.304280789Z" level=info msg="TearDown network for sandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" successfully" Apr 21 10:22:16.315744 containerd[1457]: time="2026-04-21T10:22:16.315601016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:16.315744 containerd[1457]: time="2026-04-21T10:22:16.315843054Z" level=info msg="RemovePodSandbox \"ce1161a33b7affa3d274cbbae5aad7bd60bc1a529eaa966ceaade791b35429c4\" returns successfully" Apr 21 10:22:16.317451 containerd[1457]: time="2026-04-21T10:22:16.317418898Z" level=info msg="StopPodSandbox for \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\"" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.452 [WARNING][5910] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"386cc7c2-feea-4942-a60b-423727e06d40", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215", Pod:"calico-apiserver-7999b6f797-gkbt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4f4a927720e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.453 [INFO][5910] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.453 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" iface="eth0" netns="" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.453 [INFO][5910] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.453 [INFO][5910] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.487 [INFO][5918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.490 [INFO][5918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.491 [INFO][5918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.520 [WARNING][5918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.532 [INFO][5918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.563 [INFO][5918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:16.567048 containerd[1457]: 2026-04-21 10:22:16.565 [INFO][5910] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:16.568247 containerd[1457]: time="2026-04-21T10:22:16.567147048Z" level=info msg="TearDown network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" successfully" Apr 21 10:22:16.568247 containerd[1457]: time="2026-04-21T10:22:16.567169239Z" level=info msg="StopPodSandbox for \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" returns successfully" Apr 21 10:22:16.569061 containerd[1457]: time="2026-04-21T10:22:16.569007293Z" level=info msg="RemovePodSandbox for \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\"" Apr 21 10:22:16.569126 containerd[1457]: time="2026-04-21T10:22:16.569071953Z" level=info msg="Forcibly stopping sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\"" Apr 21 10:22:16.662213 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:43974.service - OpenSSH per-connection server daemon (10.0.0.1:43974). 
Apr 21 10:22:16.838854 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 43974 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg Apr 21 10:22:16.846229 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:22:16.851733 systemd-logind[1444]: New session 15 of user core. Apr 21 10:22:16.856306 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.849 [WARNING][5938] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0", GenerateName:"calico-apiserver-7999b6f797-", Namespace:"calico-system", SelfLink:"", UID:"386cc7c2-feea-4942-a60b-423727e06d40", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7999b6f797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82fd8939dfb669118324bd47ea12f6e8eca34d1a1f3bbe0d22a1b8fb1e765215", Pod:"calico-apiserver-7999b6f797-gkbt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4f4a927720e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.850 [INFO][5938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.850 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" iface="eth0" netns="" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.850 [INFO][5938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.850 [INFO][5938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.891 [INFO][5949] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.892 [INFO][5949] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.892 [INFO][5949] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.975 [WARNING][5949] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:16.976 [INFO][5949] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" HandleID="k8s-pod-network.e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Workload="localhost-k8s-calico--apiserver--7999b6f797--gkbt6-eth0" Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:17.027 [INFO][5949] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:17.036102 containerd[1457]: 2026-04-21 10:22:17.030 [INFO][5938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203" Apr 21 10:22:17.039564 containerd[1457]: time="2026-04-21T10:22:17.035838386Z" level=info msg="TearDown network for sandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" successfully" Apr 21 10:22:17.058519 containerd[1457]: time="2026-04-21T10:22:17.058269450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:22:17.058519 containerd[1457]: time="2026-04-21T10:22:17.058607886Z" level=info msg="RemovePodSandbox \"e906f766d1b5832bf3939e8b945acd88755ffa89b46da7eb7f0a8e14fb3a9203\" returns successfully"
Apr 21 10:22:17.061771 containerd[1457]: time="2026-04-21T10:22:17.061718568Z" level=info msg="StopPodSandbox for \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\""
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.131 [WARNING][5974] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wgmqg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"220f4521-b6d9-4c5d-96ae-7597b58ee030", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e", Pod:"coredns-7d764666f9-wgmqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7472f81b665", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.132 [INFO][5974] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.132 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" iface="eth0" netns=""
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.132 [INFO][5974] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.132 [INFO][5974] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.173 [INFO][5985] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.174 [INFO][5985] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.174 [INFO][5985] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.192 [WARNING][5985] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.192 [INFO][5985] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.194 [INFO][5985] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:22:17.199368 containerd[1457]: 2026-04-21 10:22:17.196 [INFO][5974] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.199368 containerd[1457]: time="2026-04-21T10:22:17.199242392Z" level=info msg="TearDown network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" successfully"
Apr 21 10:22:17.199368 containerd[1457]: time="2026-04-21T10:22:17.199264942Z" level=info msg="StopPodSandbox for \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" returns successfully"
Apr 21 10:22:17.200127 containerd[1457]: time="2026-04-21T10:22:17.199884302Z" level=info msg="RemovePodSandbox for \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\""
Apr 21 10:22:17.200127 containerd[1457]: time="2026-04-21T10:22:17.199979179Z" level=info msg="Forcibly stopping sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\""
Apr 21 10:22:17.294239 sshd[5943]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:17.306719 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:43974.service: Deactivated successfully.
Apr 21 10:22:17.308528 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:22:17.310075 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:22:17.316254 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988).
Apr 21 10:22:17.317274 systemd-logind[1444]: Removed session 15.
Apr 21 10:22:17.377520 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:17.379653 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:17.384728 systemd-logind[1444]: New session 16 of user core.
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.291 [WARNING][6001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wgmqg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"220f4521-b6d9-4c5d-96ae-7597b58ee030", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 21, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51d2989aae8b4649d054462c7c9870b61385d71fe9d9a948376e989f2a4bbc8e", Pod:"coredns-7d764666f9-wgmqg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7472f81b665", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.292 [INFO][6001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.292 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" iface="eth0" netns=""
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.292 [INFO][6001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.292 [INFO][6001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.357 [INFO][6009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.357 [INFO][6009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.357 [INFO][6009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.383 [WARNING][6009] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.384 [INFO][6009] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" HandleID="k8s-pod-network.9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab" Workload="localhost-k8s-coredns--7d764666f9--wgmqg-eth0"
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.385 [INFO][6009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:22:17.389366 containerd[1457]: 2026-04-21 10:22:17.387 [INFO][6001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab"
Apr 21 10:22:17.389809 containerd[1457]: time="2026-04-21T10:22:17.389480363Z" level=info msg="TearDown network for sandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" successfully"
Apr 21 10:22:17.390117 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:22:17.393394 containerd[1457]: time="2026-04-21T10:22:17.393362115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:22:17.393471 containerd[1457]: time="2026-04-21T10:22:17.393427011Z" level=info msg="RemovePodSandbox \"9b55bc6b0cd5fb4b2358b5315ce8c30854bb425b1b7dff579a9b581bd48798ab\" returns successfully"
Apr 21 10:22:17.767453 sshd[6019]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:17.796610 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:43988.service: Deactivated successfully.
Apr 21 10:22:17.799072 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:22:17.800505 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:22:17.814674 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:43996.service - OpenSSH per-connection server daemon (10.0.0.1:43996).
Apr 21 10:22:17.824062 systemd-logind[1444]: Removed session 16.
Apr 21 10:22:17.991594 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 43996 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:17.999215 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:18.006273 systemd-logind[1444]: New session 17 of user core.
Apr 21 10:22:18.017229 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:22:20.715260 sshd[6035]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:20.779683 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:41114.service - OpenSSH per-connection server daemon (10.0.0.1:41114).
Apr 21 10:22:20.780511 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:43996.service: Deactivated successfully.
Apr 21 10:22:20.791666 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:22:20.792141 systemd[1]: session-17.scope: Consumed 2.378s CPU time.
Apr 21 10:22:20.792836 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:22:20.799856 systemd-logind[1444]: Removed session 17.
Apr 21 10:22:20.933024 sshd[6085]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:20.947266 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:20.966544 systemd-logind[1444]: New session 18 of user core.
Apr 21 10:22:20.978199 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:22:25.083833 sshd[6085]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:25.249578 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:41114.service: Deactivated successfully.
Apr 21 10:22:25.273432 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:22:25.276770 systemd[1]: session-18.scope: Consumed 3.498s CPU time.
Apr 21 10:22:25.296947 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:22:25.349081 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:41120.service - OpenSSH per-connection server daemon (10.0.0.1:41120).
Apr 21 10:22:25.375735 systemd-logind[1444]: Removed session 18.
Apr 21 10:22:25.583405 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 41120 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:25.588538 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:25.638400 systemd-logind[1444]: New session 19 of user core.
Apr 21 10:22:25.653040 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:22:26.292707 sshd[6104]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:26.296154 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:41120.service: Deactivated successfully.
Apr 21 10:22:26.297807 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:22:26.298443 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:22:26.299362 systemd-logind[1444]: Removed session 19.
Apr 21 10:22:31.308415 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:51088.service - OpenSSH per-connection server daemon (10.0.0.1:51088).
Apr 21 10:22:31.348932 kubelet[2508]: E0421 10:22:31.348663 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:22:31.476972 sshd[6135]: Accepted publickey for core from 10.0.0.1 port 51088 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:31.480571 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:31.488075 systemd-logind[1444]: New session 20 of user core.
Apr 21 10:22:31.507738 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:22:31.987126 sshd[6135]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:31.990651 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:51088.service: Deactivated successfully.
Apr 21 10:22:32.000636 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:22:32.001374 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:22:32.002460 systemd-logind[1444]: Removed session 20.
Apr 21 10:22:37.047411 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:51102.service - OpenSSH per-connection server daemon (10.0.0.1:51102).
Apr 21 10:22:37.180205 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 51102 ssh2: RSA SHA256:rOdqR9EuP+9jLJgXgh/cjB6uIaLpTqUX3lhjm5Dabpg
Apr 21 10:22:37.183490 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:22:37.199246 systemd-logind[1444]: New session 21 of user core.
Apr 21 10:22:37.214991 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:22:37.462854 kubelet[2508]: I0421 10:22:37.462699 2508 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:22:38.031220 sshd[6169]: pam_unix(sshd:session): session closed for user core
Apr 21 10:22:38.034420 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:51102.service: Deactivated successfully.
Apr 21 10:22:38.043169 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:22:38.044299 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:22:38.045402 systemd-logind[1444]: Removed session 21.
Apr 21 10:22:39.377615 kubelet[2508]: E0421 10:22:39.377535 2508 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"