Apr 17 23:50:52.016852 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:50:52.016947 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:50:52.016965 kernel: BIOS-provided physical RAM map:
Apr 17 23:50:52.016974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 23:50:52.016982 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 23:50:52.016991 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:50:52.017001 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 23:50:52.017010 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 23:50:52.017019 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:50:52.017030 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:50:52.017039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:50:52.017048 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:50:52.017056 kernel: NX (Execute Disable) protection: active
Apr 17 23:50:52.017065 kernel: APIC: Static calls initialized
Apr 17 23:50:52.017077 kernel: SMBIOS 2.8 present.
Apr 17 23:50:52.017089 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 23:50:52.017098 kernel: Hypervisor detected: KVM
Apr 17 23:50:52.017107 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:50:52.017117 kernel: kvm-clock: using sched offset of 4007626353 cycles
Apr 17 23:50:52.017127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:50:52.017137 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:50:52.017146 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:50:52.017155 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:50:52.017163 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 23:50:52.017172 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:50:52.017179 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:50:52.017187 kernel: Using GB pages for direct mapping
Apr 17 23:50:52.017194 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:50:52.017202 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 23:50:52.017211 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017220 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017230 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017239 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 23:50:52.017252 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017262 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017271 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017281 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:50:52.017291 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 23:50:52.017300 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 23:50:52.017310 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 23:50:52.017327 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 23:50:52.017337 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 23:50:52.017347 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 23:50:52.017357 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 23:50:52.017367 kernel: No NUMA configuration found
Apr 17 23:50:52.017377 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 23:50:52.017388 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 17 23:50:52.017400 kernel: Zone ranges:
Apr 17 23:50:52.017410 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:50:52.017420 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 23:50:52.017430 kernel: Normal empty
Apr 17 23:50:52.017441 kernel: Movable zone start for each node
Apr 17 23:50:52.017451 kernel: Early memory node ranges
Apr 17 23:50:52.017494 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:50:52.017505 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 23:50:52.017515 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 23:50:52.017525 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:50:52.017538 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:50:52.017549 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 23:50:52.017559 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:50:52.017569 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:50:52.017580 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:50:52.017590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:50:52.017601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:50:52.017611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:50:52.017621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:50:52.017634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:50:52.017644 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:50:52.017654 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:50:52.017664 kernel: TSC deadline timer available
Apr 17 23:50:52.017675 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:50:52.017685 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:50:52.017695 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:50:52.017705 kernel: kvm-guest: setup PV sched yield
Apr 17 23:50:52.017716 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:50:52.017728 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:50:52.017738 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:50:52.017749 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:50:52.017760 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:50:52.017770 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:50:52.017780 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:50:52.017818 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:50:52.017829 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:50:52.017841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:50:52.017854 kernel: random: crng init done
Apr 17 23:50:52.017864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:50:52.018203 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:50:52.018212 kernel: Fallback order for Node 0: 0
Apr 17 23:50:52.018220 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 17 23:50:52.018228 kernel: Policy zone: DMA32
Apr 17 23:50:52.018236 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:50:52.018245 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 17 23:50:52.018254 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:50:52.018268 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:50:52.018277 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:50:52.018285 kernel: Dynamic Preempt: voluntary
Apr 17 23:50:52.018293 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:50:52.018302 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:50:52.018311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:50:52.018321 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:50:52.018330 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:50:52.018338 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:50:52.018349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:50:52.018357 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:50:52.018365 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:50:52.018374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:50:52.018383 kernel: Console: colour VGA+ 80x25
Apr 17 23:50:52.018392 kernel: printk: console [ttyS0] enabled
Apr 17 23:50:52.018401 kernel: ACPI: Core revision 20230628
Apr 17 23:50:52.018410 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:50:52.018419 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:50:52.018431 kernel: x2apic enabled
Apr 17 23:50:52.018440 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:50:52.018449 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:50:52.018500 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:50:52.018509 kernel: kvm-guest: setup PV IPIs
Apr 17 23:50:52.018518 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:50:52.018527 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:50:52.018548 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:50:52.018559 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:50:52.018570 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:50:52.018579 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:50:52.018590 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:50:52.018599 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:50:52.018609 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:50:52.018619 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:50:52.018629 kernel: RETBleed: Vulnerable
Apr 17 23:50:52.018640 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:50:52.018650 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:50:52.018659 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:50:52.018669 kernel: active return thunk: its_return_thunk
Apr 17 23:50:52.018678 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:50:52.018688 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:50:52.018697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:50:52.018707 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:50:52.018716 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:50:52.018728 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:50:52.018737 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:50:52.018746 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:50:52.018756 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:50:52.018765 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:50:52.018774 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:50:52.018784 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:50:52.018793 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:50:52.018802 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:50:52.018814 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:50:52.018822 kernel: landlock: Up and running.
Apr 17 23:50:52.018830 kernel: SELinux: Initializing.
Apr 17 23:50:52.018839 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:50:52.018848 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:50:52.018856 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:50:52.018865 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:50:52.018874 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:50:52.019008 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:50:52.019022 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:50:52.019032 kernel: signal: max sigframe size: 3632
Apr 17 23:50:52.019041 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:50:52.019051 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:50:52.019061 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:50:52.019070 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:50:52.019079 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:50:52.019088 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:50:52.019098 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:50:52.019109 kernel: smpboot: Max logical packages: 1
Apr 17 23:50:52.019118 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:50:52.019128 kernel: devtmpfs: initialized
Apr 17 23:50:52.019137 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:50:52.019146 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:50:52.019155 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:50:52.019164 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:50:52.019172 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:50:52.019180 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:50:52.019192 kernel: audit: type=2000 audit(1776469851.037:1): state=initialized audit_enabled=0 res=1
Apr 17 23:50:52.019201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:50:52.019209 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:50:52.019217 kernel: cpuidle: using governor menu
Apr 17 23:50:52.019226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:50:52.019235 kernel: dca service started, version 1.12.1
Apr 17 23:50:52.019243 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:50:52.019252 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:50:52.019260 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:50:52.019271 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:50:52.019280 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:50:52.019289 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:50:52.019298 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:50:52.019307 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:50:52.019316 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:50:52.019325 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:50:52.019334 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:50:52.019344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:50:52.019357 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:50:52.019366 kernel: ACPI: Interpreter enabled
Apr 17 23:50:52.019375 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:50:52.019384 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:50:52.019392 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:50:52.019401 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:50:52.019410 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:50:52.019419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:50:52.019618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:50:52.019723 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:50:52.020182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:50:52.020200 kernel: PCI host bridge to bus 0000:00
Apr 17 23:50:52.020301 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:50:52.020389 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:50:52.020521 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:50:52.020606 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:50:52.020681 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:50:52.020758 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 23:50:52.020836 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:50:52.021034 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:50:52.021137 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:50:52.021229 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 17 23:50:52.021319 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 17 23:50:52.021405 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 17 23:50:52.021549 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:50:52.021645 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:50:52.021736 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 17 23:50:52.021826 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 17 23:50:52.022023 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 23:50:52.022186 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:50:52.022502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 17 23:50:52.022603 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 17 23:50:52.022700 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 23:50:52.023012 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:50:52.023141 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 17 23:50:52.023239 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 17 23:50:52.023331 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 23:50:52.023421 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 17 23:50:52.023576 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:50:52.023673 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:50:52.023776 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:50:52.023870 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 17 23:50:52.024044 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 17 23:50:52.024177 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:50:52.024353 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:50:52.024369 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:50:52.024381 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:50:52.024391 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:50:52.024402 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:50:52.024418 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:50:52.024429 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:50:52.024440 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:50:52.024545 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:50:52.024556 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:50:52.024567 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:50:52.024605 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:50:52.024616 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:50:52.024627 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:50:52.024643 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:50:52.024681 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:50:52.024692 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:50:52.024703 kernel: iommu: Default domain type: Translated
Apr 17 23:50:52.024715 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:50:52.024726 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:50:52.024737 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:50:52.024748 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 23:50:52.024759 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 23:50:52.024869 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:50:52.025047 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:50:52.025141 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:50:52.025154 kernel: vgaarb: loaded
Apr 17 23:50:52.025163 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:50:52.025171 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:50:52.025179 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:50:52.025188 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:50:52.025197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:50:52.025210 kernel: pnp: PnP ACPI init
Apr 17 23:50:52.025402 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:50:52.025421 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:50:52.025492 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:50:52.025504 kernel: NET: Registered PF_INET protocol family
Apr 17 23:50:52.025515 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:50:52.025527 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:50:52.025538 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:50:52.025553 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:50:52.025565 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:50:52.025576 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:50:52.025587 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:50:52.025598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:50:52.025610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:50:52.025621 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:50:52.025718 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:50:52.025804 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:50:52.025957 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:50:52.026045 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:50:52.026131 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:50:52.026207 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 23:50:52.026221 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:50:52.026231 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:50:52.026241 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:50:52.026250 kernel: Initialise system trusted keyrings
Apr 17 23:50:52.026263 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:50:52.026272 kernel: Key type asymmetric registered
Apr 17 23:50:52.026281 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:50:52.026289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:50:52.026298 kernel: io scheduler mq-deadline registered
Apr 17 23:50:52.026307 kernel: io scheduler kyber registered
Apr 17 23:50:52.026316 kernel: io scheduler bfq registered
Apr 17 23:50:52.026325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:50:52.026334 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:50:52.026346 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:50:52.026355 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:50:52.026364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:50:52.026374 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:50:52.026383 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:50:52.026392 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:50:52.026401 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:50:52.026410 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:50:52.026572 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:50:52.026740 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:50:52.026825 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:50:51 UTC (1776469851)
Apr 17 23:50:52.026996 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 23:50:52.027011 kernel: intel_pstate: CPU model not supported
Apr 17 23:50:52.027021 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:50:52.027032 kernel: Segment Routing with IPv6
Apr 17 23:50:52.027042 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:50:52.027052 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:50:52.027066 kernel: Key type dns_resolver registered
Apr 17 23:50:52.027075 kernel: IPI shorthand broadcast: enabled
Apr 17 23:50:52.027086 kernel: sched_clock: Marking stable (1303023945, 185640102)->(1541924522, -53260475)
Apr 17 23:50:52.027095 kernel: registered taskstats version 1
Apr 17 23:50:52.027104 kernel: Loading compiled-in X.509 certificates
Apr 17 23:50:52.027115 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:50:52.027124 kernel: Key type .fscrypt registered
Apr 17 23:50:52.027134 kernel: Key type fscrypt-provisioning registered
Apr 17 23:50:52.027143 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:50:52.027155 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:50:52.027164 kernel: ima: No architecture policies found
Apr 17 23:50:52.027172 kernel: clk: Disabling unused clocks
Apr 17 23:50:52.027180 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:50:52.027188 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:50:52.027197 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:50:52.027206 kernel: Run /init as init process
Apr 17 23:50:52.027214 kernel: with arguments:
Apr 17 23:50:52.027223 kernel: /init
Apr 17 23:50:52.027234 kernel: with environment:
Apr 17 23:50:52.027244 kernel: HOME=/
Apr 17 23:50:52.027255 kernel: TERM=linux
Apr 17 23:50:52.027270 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:50:52.027285 systemd[1]: Detected virtualization kvm.
Apr 17 23:50:52.027297 systemd[1]: Detected architecture x86-64.
Apr 17 23:50:52.027307 systemd[1]: Running in initrd.
Apr 17 23:50:52.027319 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:50:52.027333 systemd[1]: Hostname set to .
Apr 17 23:50:52.027345 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:50:52.027357 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:50:52.027367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:52.027380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:52.027393 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:50:52.027405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:50:52.027417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:50:52.027432 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:50:52.027668 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:50:52.027679 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:50:52.027689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:52.027698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:52.027712 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:50:52.027723 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:50:52.027733 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:50:52.027743 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:50:52.027753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:50:52.027763 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:50:52.027774 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:50:52.027784 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:50:52.027796 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:50:52.027806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:50:52.027817 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:50:52.027827 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:50:52.027837 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:50:52.027846 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:50:52.027856 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:50:52.027865 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:50:52.027875 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:50:52.027945 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:50:52.027956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:52.027967 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:50:52.027977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:52.028012 systemd-journald[195]: Collecting audit messages is disabled.
Apr 17 23:50:52.028042 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:50:52.028060 systemd-journald[195]: Journal started
Apr 17 23:50:52.028087 systemd-journald[195]: Runtime Journal (/run/log/journal/1ead3a072ae94a578a6952f7567528ac) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:50:52.037365 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:50:52.023107 systemd-modules-load[196]: Inserted module 'overlay'
Apr 17 23:50:52.217195 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:50:52.217238 kernel: Bridge firewalling registered
Apr 17 23:50:52.217253 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:50:52.059552 systemd-modules-load[196]: Inserted module 'br_netfilter'
Apr 17 23:50:52.219966 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:52.224963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:50:52.230534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:50:52.248175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:50:52.255043 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:50:52.256393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:50:52.259048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:50:52.264799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:50:52.269693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:50:52.277277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:50:52.280727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:50:52.289647 dracut-cmdline[228]: dracut-dracut-053 Apr 17 23:50:52.292500 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:50:52.304433 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:50:52.314219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:50:52.338819 systemd-resolved[248]: Positive Trust Anchors: Apr 17 23:50:52.338856 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:50:52.338923 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:50:52.340982 systemd-resolved[248]: Defaulting to hostname 'linux'. Apr 17 23:50:52.341709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:50:52.348649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:50:52.408090 kernel: SCSI subsystem initialized Apr 17 23:50:52.416974 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:50:52.430049 kernel: iscsi: registered transport (tcp) Apr 17 23:50:52.450406 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:50:52.450542 kernel: QLogic iSCSI HBA Driver Apr 17 23:50:52.484861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:50:52.500081 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:50:52.526057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 17 23:50:52.526108 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:50:52.529065 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:50:52.569993 kernel: raid6: avx512x4 gen() 46062 MB/s Apr 17 23:50:52.587988 kernel: raid6: avx512x2 gen() 45659 MB/s Apr 17 23:50:52.606067 kernel: raid6: avx512x1 gen() 42369 MB/s Apr 17 23:50:52.624108 kernel: raid6: avx2x4 gen() 37215 MB/s Apr 17 23:50:52.642059 kernel: raid6: avx2x2 gen() 36064 MB/s Apr 17 23:50:52.661078 kernel: raid6: avx2x1 gen() 26699 MB/s Apr 17 23:50:52.661155 kernel: raid6: using algorithm avx512x4 gen() 46062 MB/s Apr 17 23:50:52.681096 kernel: raid6: .... xor() 10169 MB/s, rmw enabled Apr 17 23:50:52.681185 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:50:52.701045 kernel: xor: automatically using best checksumming function avx Apr 17 23:50:52.846107 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:50:52.855964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:50:52.871118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:50:52.881493 systemd-udevd[417]: Using default interface naming scheme 'v255'. Apr 17 23:50:52.884244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:50:52.885290 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:50:52.907147 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Apr 17 23:50:52.932578 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:50:52.948072 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:50:52.979041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:50:52.988056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 17 23:50:53.001185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:50:53.004763 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:50:53.014001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:50:53.017588 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:50:53.036069 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 23:50:53.038509 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:50:53.058308 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 23:50:53.058496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:50:53.058517 kernel: GPT:9289727 != 19775487 Apr 17 23:50:53.058526 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:50:53.058535 kernel: GPT:9289727 != 19775487 Apr 17 23:50:53.058543 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:50:53.058552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:53.044542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:50:53.044615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:50:53.064125 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:50:53.070142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:50:53.070310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:50:53.073799 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:50:53.085604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:50:53.095770 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 17 23:50:53.103223 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:50:53.113398 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 17 23:50:53.113961 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (478) Apr 17 23:50:53.118378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 23:50:53.134921 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Apr 17 23:50:53.134956 kernel: libata version 3.00 loaded. Apr 17 23:50:53.141952 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 23:50:53.145951 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 23:50:53.145982 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 17 23:50:53.146102 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 23:50:53.147038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 23:50:53.361616 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 17 23:50:53.361637 kernel: AES CTR mode by8 optimization enabled Apr 17 23:50:53.361645 kernel: scsi host0: ahci Apr 17 23:50:53.361772 kernel: scsi host1: ahci Apr 17 23:50:53.361850 kernel: scsi host2: ahci Apr 17 23:50:53.361993 kernel: scsi host3: ahci Apr 17 23:50:53.362061 kernel: scsi host4: ahci Apr 17 23:50:53.362132 kernel: scsi host5: ahci Apr 17 23:50:53.362200 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 33 Apr 17 23:50:53.362208 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 33 Apr 17 23:50:53.362215 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 33 Apr 17 23:50:53.362225 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 33 Apr 17 23:50:53.362232 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 33 Apr 17 23:50:53.362239 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 33 Apr 17 23:50:53.347224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 23:50:53.347540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:50:53.375012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:53.352110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:50:53.379980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:53.363168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:50:53.384929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:53.384942 disk-uuid[557]: Primary Header is updated. Apr 17 23:50:53.384942 disk-uuid[557]: Secondary Entries is updated. Apr 17 23:50:53.384942 disk-uuid[557]: Secondary Header is updated. Apr 17 23:50:53.403233 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:50:53.425806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:50:53.462005 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:53.465947 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:53.465969 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:53.470787 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:53.470848 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 23:50:53.474424 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 23:50:53.476935 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 23:50:53.476962 kernel: ata3.00: applying bridge limits Apr 17 23:50:53.480054 kernel: ata3.00: configured for UDMA/100 Apr 17 23:50:53.482944 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:50:53.550974 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 23:50:53.551665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:50:53.568956 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:50:54.386197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 23:50:54.386288 disk-uuid[558]: The operation has completed successfully. Apr 17 23:50:54.408567 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:50:54.408679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:50:54.435317 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:50:54.441789 sh[597]: Success Apr 17 23:50:54.455023 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 17 23:50:54.485673 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:50:54.510353 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:50:54.512931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:50:54.529790 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:50:54.529807 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:54.529815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:50:54.529823 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:50:54.533450 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:50:54.540082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:50:54.540552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:50:54.563071 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:50:54.569817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:50:54.581262 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:54.581292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:54.581301 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:50:54.588128 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:50:54.596525 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:50:54.601331 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:54.607759 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:50:54.616067 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 23:50:54.667546 ignition[695]: Ignition 2.19.0 Apr 17 23:50:54.667575 ignition[695]: Stage: fetch-offline Apr 17 23:50:54.667597 ignition[695]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:54.667604 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:50:54.667691 ignition[695]: parsed url from cmdline: "" Apr 17 23:50:54.667694 ignition[695]: no config URL provided Apr 17 23:50:54.667697 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:50:54.667702 ignition[695]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:50:54.667721 ignition[695]: op(1): [started] loading QEMU firmware config module Apr 17 23:50:54.667724 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 23:50:54.674003 ignition[695]: op(1): [finished] loading QEMU firmware config module Apr 17 23:50:54.711411 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:50:54.723172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:50:54.740463 systemd-networkd[786]: lo: Link UP Apr 17 23:50:54.740520 systemd-networkd[786]: lo: Gained carrier Apr 17 23:50:54.741394 systemd-networkd[786]: Enumeration completed Apr 17 23:50:54.742003 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:50:54.743557 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:50:54.743559 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:50:54.744840 systemd-networkd[786]: eth0: Link UP Apr 17 23:50:54.744842 systemd-networkd[786]: eth0: Gained carrier Apr 17 23:50:54.744847 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:50:54.745689 systemd[1]: Reached target network.target - Network. Apr 17 23:50:54.829992 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 23:50:54.927188 ignition[695]: parsing config with SHA512: 28f1df16bb1bd08c8293b6bb9e441657190a1d08855be3bf49815b93d1aacf1e1d429180db510611f43439ed9abeb1d448831d006801d8266b699636e9c44e8f Apr 17 23:50:54.935183 unknown[695]: fetched base config from "system" Apr 17 23:50:54.935192 unknown[695]: fetched user config from "qemu" Apr 17 23:50:54.935564 ignition[695]: fetch-offline: fetch-offline passed Apr 17 23:50:54.935612 ignition[695]: Ignition finished successfully Apr 17 23:50:54.945267 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:50:54.945582 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 23:50:54.959149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:50:54.975258 ignition[791]: Ignition 2.19.0 Apr 17 23:50:54.975282 ignition[791]: Stage: kargs Apr 17 23:50:54.975413 ignition[791]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:54.975420 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:50:54.976181 ignition[791]: kargs: kargs passed Apr 17 23:50:54.976210 ignition[791]: Ignition finished successfully Apr 17 23:50:54.989444 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:50:55.002258 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:50:55.015924 ignition[799]: Ignition 2.19.0 Apr 17 23:50:55.015933 ignition[799]: Stage: disks Apr 17 23:50:55.017670 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:50:55.016061 ignition[799]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:55.023210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 17 23:50:55.016069 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:50:55.028782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:50:55.016767 ignition[799]: disks: disks passed Apr 17 23:50:55.034188 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:50:55.016798 ignition[799]: Ignition finished successfully Apr 17 23:50:55.039615 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:50:55.045077 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:50:55.056137 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:50:55.067840 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:50:55.072261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:50:55.093073 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:50:55.200967 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:50:55.201211 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:50:55.205964 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:50:55.218065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:50:55.223238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:50:55.223570 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:50:55.237411 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Apr 17 23:50:55.223599 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 17 23:50:55.252402 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:55.252950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:55.252963 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:50:55.252971 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:50:55.223615 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:50:55.257022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:50:55.257200 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:50:55.266125 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:50:55.310568 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:50:55.317979 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:50:55.322299 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:50:55.328568 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:50:55.421295 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:50:55.432007 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:50:55.437812 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:50:55.443942 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:55.465710 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:50:55.470835 ignition[930]: INFO : Ignition 2.19.0 Apr 17 23:50:55.470835 ignition[930]: INFO : Stage: mount Apr 17 23:50:55.474686 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:55.474686 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:50:55.474686 ignition[930]: INFO : mount: mount passed Apr 17 23:50:55.474686 ignition[930]: INFO : Ignition finished successfully Apr 17 23:50:55.485331 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:50:55.496167 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:50:55.523057 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:50:55.531202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:50:55.539960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Apr 17 23:50:55.545445 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:50:55.545527 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:50:55.545538 kernel: BTRFS info (device vda6): using free space tree Apr 17 23:50:55.553987 kernel: BTRFS info (device vda6): auto enabling async discard Apr 17 23:50:55.555070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:50:55.589729 ignition[962]: INFO : Ignition 2.19.0 Apr 17 23:50:55.589729 ignition[962]: INFO : Stage: files Apr 17 23:50:55.589729 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:50:55.597729 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:50:55.597729 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:50:55.597729 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:50:55.597729 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:50:55.597729 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:50:55.597729 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:50:55.597729 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:50:55.597729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:50:55.597729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 17 23:50:55.597729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:50:55.597729 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:50:55.594874 unknown[962]: wrote ssh authorized keys file for user: core Apr 17 23:50:55.674772 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 23:50:55.829645 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 
23:50:55.829645 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:50:55.839639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:50:55.844746 
ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:50:55.844746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:50:56.151568 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 17 23:50:56.182355 systemd-networkd[786]: eth0: Gained IPv6LL Apr 17 23:50:56.559301 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:50:56.559301 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 17 
23:50:56.568825 ignition[962]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 17 23:50:56.568825 ignition[962]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 17 23:50:56.638110 ignition[962]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:50:56.647367 ignition[962]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 17 23:50:56.651754 ignition[962]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 17 23:50:56.651754 ignition[962]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:50:56.651754 ignition[962]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:50:56.651754 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:50:56.651754 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:50:56.651754 ignition[962]: INFO : files: files passed Apr 17 23:50:56.651754 ignition[962]: INFO : Ignition finished successfully Apr 17 23:50:56.675001 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:50:56.691180 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:50:56.698034 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 17 23:50:56.698298 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:50:56.698384 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:50:56.720694 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:50:56.727177 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:56.727177 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:56.731088 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:50:56.730026 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:50:56.745389 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:50:56.760257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:50:56.782569 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:50:56.785234 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:50:56.785476 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:50:56.791456 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:50:56.797149 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:50:56.797868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:50:56.812819 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:50:56.813933 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:50:56.827641 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:50:56.827822 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:50:56.833463 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:50:56.839316 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:50:56.839419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:50:56.849015 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:50:56.852198 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:50:56.861668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:50:56.861981 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:50:56.873032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:50:56.873238 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:50:56.883468 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:50:56.889745 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:50:56.889989 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:50:56.894864 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:50:56.900066 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:50:56.900221 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:50:56.911543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:56.911723 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:56.917008 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:50:56.925177 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:56.931653 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:50:56.931803 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:50:56.939535 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:50:56.939804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:50:56.948311 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:50:56.948447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:50:56.952990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:56.962127 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:50:56.964593 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:50:56.967120 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:50:56.967220 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:50:56.969567 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:50:56.969620 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:50:56.974151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:50:56.974257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:50:56.978689 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:50:56.978793 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:50:57.014173 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:50:57.014267 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:50:57.014343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:57.024648 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:50:57.027864 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:50:57.028021 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:50:57.032411 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:50:57.032539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:50:57.048061 ignition[1017]: INFO : Ignition 2.19.0
Apr 17 23:50:57.048061 ignition[1017]: INFO : Stage: umount
Apr 17 23:50:57.055171 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:50:57.055171 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:50:57.055171 ignition[1017]: INFO : umount: umount passed
Apr 17 23:50:57.055171 ignition[1017]: INFO : Ignition finished successfully
Apr 17 23:50:57.055355 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:50:57.055456 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:50:57.061853 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:50:57.063539 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:50:57.063629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:50:57.089561 systemd[1]: Stopped target network.target - Network.
Apr 17 23:50:57.095090 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:50:57.095187 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:50:57.102695 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:50:57.102770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:50:57.107595 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:50:57.107634 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:50:57.109999 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:50:57.110032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:50:57.114937 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:50:57.119956 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:50:57.135803 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:50:57.135988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:50:57.143714 systemd-networkd[786]: eth0: DHCPv6 lease lost
Apr 17 23:50:57.144225 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:50:57.144332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:50:57.146257 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:50:57.146313 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:50:57.150669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:50:57.150703 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:50:57.169809 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:50:57.169994 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:50:57.172649 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:50:57.172672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:50:57.190064 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:50:57.190162 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:50:57.190197 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:50:57.195083 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:50:57.195115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:50:57.203449 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:50:57.203481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:57.209816 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:50:57.226976 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:50:57.227113 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:50:57.236569 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:50:57.236862 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:50:57.240086 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:50:57.240114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:50:57.246614 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:50:57.246647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:50:57.252311 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:50:57.252347 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:50:57.263117 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:50:57.263155 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:50:57.270853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:50:57.270938 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:50:57.296148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:50:57.299256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:50:57.299300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:50:57.302571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:50:57.302601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:57.302947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:50:57.303030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:50:57.312025 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:50:57.315627 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:50:57.327307 systemd[1]: Switching root.
Apr 17 23:50:57.361044 systemd-journald[195]: Journal stopped
Apr 17 23:50:58.260417 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:50:58.260466 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:50:58.260479 kernel: SELinux: policy capability open_perms=1
Apr 17 23:50:58.260487 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:50:58.260530 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:50:58.260538 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:50:58.260546 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:50:58.260556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:50:58.260564 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:50:58.260573 kernel: audit: type=1403 audit(1776469857.516:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:50:58.260584 systemd[1]: Successfully loaded SELinux policy in 38.650ms.
Apr 17 23:50:58.260599 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.734ms.
Apr 17 23:50:58.260608 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:50:58.260616 systemd[1]: Detected virtualization kvm.
Apr 17 23:50:58.260624 systemd[1]: Detected architecture x86-64.
Apr 17 23:50:58.260631 systemd[1]: Detected first boot.
Apr 17 23:50:58.260641 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:50:58.260649 zram_generator::config[1078]: No configuration found.
Apr 17 23:50:58.260661 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:50:58.260668 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:50:58.260677 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 23:50:58.260685 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:50:58.260693 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:50:58.260700 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:50:58.260708 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:50:58.260716 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:50:58.261032 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:50:58.261046 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:50:58.261054 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:50:58.261061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:50:58.261069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:50:58.261077 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:50:58.261084 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:50:58.261092 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:50:58.261100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:50:58.261110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:50:58.261118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:50:58.261128 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:50:58.261136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:50:58.261147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:50:58.261155 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:50:58.261163 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:50:58.261170 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:50:58.261179 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:50:58.261186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:50:58.261194 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:50:58.261202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:50:58.261209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:50:58.261217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:50:58.261224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:50:58.261232 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:50:58.261240 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:50:58.261249 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:50:58.261257 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:58.261264 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:50:58.261272 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:50:58.261279 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:50:58.261287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:50:58.261297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:50:58.261305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:50:58.261312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:50:58.261322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:50:58.261329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:50:58.261337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:50:58.261345 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:50:58.261352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:50:58.261360 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:50:58.261371 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:50:58.261379 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:50:58.261387 kernel: fuse: init (API version 7.39)
Apr 17 23:50:58.261396 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:50:58.261403 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:50:58.261410 kernel: ACPI: bus type drm_connector registered
Apr 17 23:50:58.261417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:50:58.261425 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:50:58.261449 systemd-journald[1168]: Collecting audit messages is disabled.
Apr 17 23:50:58.261466 systemd-journald[1168]: Journal started
Apr 17 23:50:58.261484 systemd-journald[1168]: Runtime Journal (/run/log/journal/1ead3a072ae94a578a6952f7567528ac) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:50:58.268022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:50:58.272935 kernel: loop: module loaded
Apr 17 23:50:58.272965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:50:58.277931 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:50:58.282835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:50:58.285953 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:50:58.289060 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:50:58.291724 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:50:58.294692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:50:58.297660 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:50:58.300393 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:50:58.303722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:50:58.307538 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:50:58.307721 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:50:58.311161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:50:58.311302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:50:58.314731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:50:58.314862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:50:58.318217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:50:58.318360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:50:58.321789 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:50:58.321968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:50:58.325355 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:50:58.325616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:50:58.328728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:50:58.332125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:50:58.335573 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:50:58.339170 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:50:58.350376 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:50:58.358066 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:50:58.362072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:50:58.365027 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:50:58.367243 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:50:58.371210 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:50:58.374244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:50:58.375722 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:50:58.378709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:50:58.381711 systemd-journald[1168]: Time spent on flushing to /var/log/journal/1ead3a072ae94a578a6952f7567528ac is 18.832ms for 939 entries.
Apr 17 23:50:58.381711 systemd-journald[1168]: System Journal (/var/log/journal/1ead3a072ae94a578a6952f7567528ac) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:50:58.417054 systemd-journald[1168]: Received client request to flush runtime journal.
Apr 17 23:50:58.382028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:50:58.385395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:50:58.388474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:50:58.393813 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:50:58.397359 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:50:58.400677 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:50:58.407266 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:50:58.411075 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:50:58.418865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:50:58.427087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:50:58.431837 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 17 23:50:58.431869 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Apr 17 23:50:58.435553 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:50:58.448337 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:50:58.471382 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:50:58.481136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:50:58.492873 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 17 23:50:58.493047 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 17 23:50:58.496198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:50:58.749669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:50:58.761129 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:50:58.783009 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
Apr 17 23:50:58.801359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:50:58.810050 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:50:58.822160 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:50:58.838301 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 17 23:50:58.841031 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1264)
Apr 17 23:50:58.857586 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:50:58.885711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:50:58.899985 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 17 23:50:58.913941 kernel: ACPI: button: Power Button [PWRF]
Apr 17 23:50:58.922549 systemd-networkd[1255]: lo: Link UP
Apr 17 23:50:58.922678 systemd-networkd[1255]: lo: Gained carrier
Apr 17 23:50:58.924999 systemd-networkd[1255]: Enumeration completed
Apr 17 23:50:58.925768 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:50:58.925770 systemd-networkd[1255]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:50:58.926090 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:50:58.933256 systemd-networkd[1255]: eth0: Link UP
Apr 17 23:50:58.934149 systemd-networkd[1255]: eth0: Gained carrier
Apr 17 23:50:58.934327 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:50:58.941950 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 17 23:50:58.942052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:50:58.955213 systemd-networkd[1255]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:50:59.040471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:50:59.042173 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 23:50:59.042348 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:50:59.042360 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 17 23:50:59.046429 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 23:50:59.162823 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:50:59.344046 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:50:59.348543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:50:59.352298 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:50:59.381666 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:50:59.385466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:50:59.393113 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:50:59.397958 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:50:59.438258 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:50:59.442455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:50:59.446033 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:50:59.446079 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:50:59.448862 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:50:59.452806 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:50:59.467044 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:50:59.471557 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:50:59.474428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:50:59.475239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:50:59.479612 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:50:59.484282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:50:59.487819 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:50:59.493393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:50:59.509014 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 23:50:59.515548 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:50:59.516125 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:50:59.525964 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:50:59.555992 kernel: loop1: detected capacity change from 0 to 140768
Apr 17 23:50:59.601019 kernel: loop2: detected capacity change from 0 to 142488
Apr 17 23:50:59.644947 kernel: loop3: detected capacity change from 0 to 228704
Apr 17 23:50:59.659052 kernel: loop4: detected capacity change from 0 to 140768
Apr 17 23:50:59.673944 kernel: loop5: detected capacity change from 0 to 142488
Apr 17 23:50:59.683147 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 23:50:59.683452 (sd-merge)[1312]: Merged extensions into '/usr'.
Apr 17 23:50:59.686638 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:50:59.686671 systemd[1]: Reloading...
Apr 17 23:50:59.728009 zram_generator::config[1340]: No configuration found.
Apr 17 23:50:59.734874 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:50:59.825250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:50:59.868599 systemd[1]: Reloading finished in 181 ms.
Apr 17 23:50:59.885367 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:50:59.889078 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:50:59.907318 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:50:59.910469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:50:59.915224 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:50:59.915256 systemd[1]: Reloading...
Apr 17 23:50:59.929968 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:50:59.930325 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:50:59.931253 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:50:59.931577 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Apr 17 23:50:59.931668 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Apr 17 23:50:59.934148 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:50:59.934178 systemd-tmpfiles[1386]: Skipping /boot
Apr 17 23:50:59.941593 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:50:59.941628 systemd-tmpfiles[1386]: Skipping /boot
Apr 17 23:50:59.964236 zram_generator::config[1414]: No configuration found.
Apr 17 23:51:00.057229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:51:00.097143 systemd[1]: Reloading finished in 181 ms.
Apr 17 23:51:00.114670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:51:00.132954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:51:00.137792 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:51:00.142461 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:51:00.148720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:51:00.153025 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:51:00.162007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.162143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:51:00.164240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:51:00.170109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:51:00.174777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:51:00.178144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:51:00.178232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.179707 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:51:00.184257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:51:00.184369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:51:00.188116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:51:00.188317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:51:00.192219 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:51:00.192401 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:51:00.202168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:51:00.207064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:51:00.209976 augenrules[1494]: No rules
Apr 17 23:51:00.211084 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:51:00.216402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.216631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:51:00.223189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:51:00.224054 systemd-resolved[1465]: Positive Trust Anchors:
Apr 17 23:51:00.224083 systemd-resolved[1465]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:51:00.224108 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:51:00.226988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:51:00.227866 systemd-resolved[1465]: Defaulting to hostname 'linux'.
Apr 17 23:51:00.232000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:51:00.235028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:51:00.238038 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:51:00.241061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:51:00.241192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.241851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:51:00.245462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:51:00.245675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:51:00.249122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:51:00.260239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:51:00.264217 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:51:00.264351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:51:00.270872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:51:00.275686 systemd[1]: Reached target network.target - Network.
Apr 17 23:51:00.278149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:51:00.281356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.281490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:51:00.290242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:51:00.294291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:51:00.297738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:51:00.304035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:51:00.306871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:51:00.307005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:51:00.307021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:51:00.307730 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:51:00.310417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:51:00.310586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:51:00.314236 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:51:00.314349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:51:00.317497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:51:00.317653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:51:00.321203 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:51:00.321327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:51:00.327805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:51:00.327934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:51:00.340054 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:51:00.380786 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:51:01.321541 systemd-resolved[1465]: Clock change detected. Flushing caches.
Apr 17 23:51:01.321592 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:51:01.321603 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 23:51:01.321637 systemd-timesyncd[1532]: Initial clock synchronization to Fri 2026-04-17 23:51:01.321423 UTC.
Apr 17 23:51:01.324913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:51:01.328530 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:51:01.331842 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:51:01.335545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:51:01.335595 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:51:01.338054 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:51:01.340814 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:51:01.343631 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:51:01.346839 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:51:01.349765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:51:01.353956 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:51:01.357336 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:51:01.374545 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:51:01.378540 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:51:01.381503 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:51:01.384205 systemd[1]: System is tainted: cgroupsv1
Apr 17 23:51:01.384260 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:51:01.384279 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:51:01.385292 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:51:01.389550 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:51:01.394601 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:51:01.399814 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:51:01.402141 jq[1538]: false
Apr 17 23:51:01.402573 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:51:01.404239 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:51:01.409248 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:51:01.415151 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:51:01.418960 extend-filesystems[1540]: Found loop3
Apr 17 23:51:01.418960 extend-filesystems[1540]: Found loop4
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found loop5
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found sr0
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda1
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda2
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda3
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found usr
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda4
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda6
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda7
Apr 17 23:51:01.429995 extend-filesystems[1540]: Found vda9
Apr 17 23:51:01.429995 extend-filesystems[1540]: Checking size of /dev/vda9
Apr 17 23:51:01.470569 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 23:51:01.470596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1249)
Apr 17 23:51:01.420980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:51:01.470671 extend-filesystems[1540]: Resized partition /dev/vda9
Apr 17 23:51:01.433247 dbus-daemon[1537]: [system] SELinux support is enabled
Apr 17 23:51:01.428333 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:51:01.473822 extend-filesystems[1559]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:51:01.457224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:51:01.458341 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:51:01.471831 systemd-networkd[1255]: eth0: Gained IPv6LL
Apr 17 23:51:01.479916 update_engine[1562]: I20260417 23:51:01.479828 1562 main.cc:92] Flatcar Update Engine starting
Apr 17 23:51:01.481325 update_engine[1562]: I20260417 23:51:01.480991 1562 update_check_scheduler.cc:74] Next update check in 10m0s
Apr 17 23:51:01.485595 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:51:01.490110 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:51:01.493509 jq[1566]: true
Apr 17 23:51:01.495614 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 23:51:01.497768 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:51:01.510569 systemd-logind[1553]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:51:01.510609 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:51:01.512378 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 23:51:01.512378 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 23:51:01.512378 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 23:51:01.511530 systemd-logind[1553]: New seat seat0.
Apr 17 23:51:01.526626 extend-filesystems[1540]: Resized filesystem in /dev/vda9
Apr 17 23:51:01.512030 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:51:01.512248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:51:01.512424 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:51:01.512904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:51:01.523555 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:51:01.529729 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:51:01.529901 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:51:01.540739 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:51:01.540987 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:51:01.553542 jq[1574]: true
Apr 17 23:51:01.554331 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:51:01.559647 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 17 23:51:01.562779 tar[1573]: linux-amd64/LICENSE
Apr 17 23:51:01.562779 tar[1573]: linux-amd64/helm
Apr 17 23:51:01.565869 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:51:01.570341 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:51:01.581412 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 23:51:01.586704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:51:01.601118 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:51:01.608153 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:51:01.608294 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:51:01.613247 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:51:01.613377 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:51:01.617710 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:51:01.619395 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:51:01.625543 bash[1605]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:51:01.627050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:51:01.642695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:51:01.657151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 23:51:01.663991 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 23:51:01.664224 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 23:51:01.667970 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:51:01.737299 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:51:01.756522 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:51:01.766512 containerd[1575]: time="2026-04-17T23:51:01.764668319Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:51:01.789656 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.789739303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791741080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791768266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791785167Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791901557Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791913464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791948645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.791957474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.792144444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.792156147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793703 containerd[1575]: time="2026-04-17T23:51:01.792169519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792177500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792234497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792376237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792533099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792543948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792601933Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:51:01.793850 containerd[1575]: time="2026-04-17T23:51:01.792633576Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:51:01.799619 containerd[1575]: time="2026-04-17T23:51:01.799230362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:51:01.799619 containerd[1575]: time="2026-04-17T23:51:01.799274590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:51:01.799619 containerd[1575]: time="2026-04-17T23:51:01.799289255Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:51:01.799619 containerd[1575]: time="2026-04-17T23:51:01.799400772Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:51:01.799619 containerd[1575]: time="2026-04-17T23:51:01.799415565Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:51:01.799723 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:51:01.802873 containerd[1575]: time="2026-04-17T23:51:01.802857017Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:51:01.803259 containerd[1575]: time="2026-04-17T23:51:01.803242585Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:51:01.803383 containerd[1575]: time="2026-04-17T23:51:01.803372145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:51:01.803419 containerd[1575]: time="2026-04-17T23:51:01.803413329Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:51:01.803529 containerd[1575]: time="2026-04-17T23:51:01.803520322Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:51:01.803574 containerd[1575]: time="2026-04-17T23:51:01.803567353Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803606 containerd[1575]: time="2026-04-17T23:51:01.803600607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803634 containerd[1575]: time="2026-04-17T23:51:01.803628401Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803661 containerd[1575]: time="2026-04-17T23:51:01.803655688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803690 containerd[1575]: time="2026-04-17T23:51:01.803684226Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803719 containerd[1575]: time="2026-04-17T23:51:01.803712874Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803752 containerd[1575]: time="2026-04-17T23:51:01.803745689Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803785 containerd[1575]: time="2026-04-17T23:51:01.803778324Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:51:01.803827 containerd[1575]: time="2026-04-17T23:51:01.803816803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804012 containerd[1575]: time="2026-04-17T23:51:01.804001551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804056 containerd[1575]: time="2026-04-17T23:51:01.804049871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804136 containerd[1575]: time="2026-04-17T23:51:01.804128120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804166 containerd[1575]: time="2026-04-17T23:51:01.804160562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804194 containerd[1575]: time="2026-04-17T23:51:01.804188936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804222 containerd[1575]: time="2026-04-17T23:51:01.804216675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804250 containerd[1575]: time="2026-04-17T23:51:01.804243425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.804283 containerd[1575]: time="2026-04-17T23:51:01.804276552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804308074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804318129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804327962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804337600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804348658Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804365195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804374560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804382060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804420296Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804504842Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804514904Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804523931Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:51:01.805382 containerd[1575]: time="2026-04-17T23:51:01.804531588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805679 containerd[1575]: time="2026-04-17T23:51:01.804545510Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:51:01.805679 containerd[1575]: time="2026-04-17T23:51:01.804553341Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:51:01.805679 containerd[1575]: time="2026-04-17T23:51:01.804560401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:51:01.805720 containerd[1575]: time="2026-04-17T23:51:01.804759621Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:51:01.805720 containerd[1575]: time="2026-04-17T23:51:01.804803720Z" level=info msg="Connect containerd service"
Apr 17 23:51:01.805720 containerd[1575]: time="2026-04-17T23:51:01.804833774Z" level=info msg="using legacy CRI server"
Apr 17 23:51:01.805720 containerd[1575]: time="2026-04-17T23:51:01.804838506Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:51:01.805720 containerd[1575]: time="2026-04-17T23:51:01.805006683Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:51:01.806180 containerd[1575]: time="2026-04-17T23:51:01.806158307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:51:01.807023 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806538404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806570131Z" level=info msg=serving...
address=/run/containerd/containerd.sock Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806600062Z" level=info msg="Start subscribing containerd event" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806625584Z" level=info msg="Start recovering state" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806667186Z" level=info msg="Start event monitor" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806677943Z" level=info msg="Start snapshots syncer" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806684574Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806689964Z" level=info msg="Start streaming server" Apr 17 23:51:01.807187 containerd[1575]: time="2026-04-17T23:51:01.806721143Z" level=info msg="containerd successfully booted in 0.042943s" Apr 17 23:51:01.810652 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:51:01.810818 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:51:01.823833 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:51:01.835781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:51:01.846751 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:51:01.850800 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:51:01.854031 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:51:02.053786 tar[1573]: linux-amd64/README.md Apr 17 23:51:02.068366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:51:02.432331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:02.436003 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 17 23:51:02.437128 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:51:02.439360 systemd[1]: Startup finished in 7.244s (kernel) + 4.023s (userspace) = 11.267s. Apr 17 23:51:02.917560 kubelet[1674]: E0417 23:51:02.917243 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:51:02.919287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:51:02.919483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:51:06.133040 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:51:06.147785 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:39596.service - OpenSSH per-connection server daemon (10.0.0.1:39596). Apr 17 23:51:06.202526 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 39596 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:06.204578 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:06.216020 systemd-logind[1553]: New session 1 of user core. Apr 17 23:51:06.216753 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:51:06.228917 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:51:06.242654 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:51:06.245371 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 17 23:51:06.267960 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:51:06.436420 systemd[1693]: Queued start job for default target default.target. Apr 17 23:51:06.436821 systemd[1693]: Created slice app.slice - User Application Slice. Apr 17 23:51:06.436860 systemd[1693]: Reached target paths.target - Paths. Apr 17 23:51:06.436868 systemd[1693]: Reached target timers.target - Timers. Apr 17 23:51:06.447653 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:51:06.454773 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:51:06.454851 systemd[1693]: Reached target sockets.target - Sockets. Apr 17 23:51:06.454863 systemd[1693]: Reached target basic.target - Basic System. Apr 17 23:51:06.454895 systemd[1693]: Reached target default.target - Main User Target. Apr 17 23:51:06.454913 systemd[1693]: Startup finished in 178ms. Apr 17 23:51:06.455315 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:51:06.456609 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:51:06.524857 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:39612.service - OpenSSH per-connection server daemon (10.0.0.1:39612). Apr 17 23:51:06.554265 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 39612 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:06.555515 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:06.560229 systemd-logind[1553]: New session 2 of user core. Apr 17 23:51:06.570739 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:51:06.625929 sshd[1705]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:06.644015 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:39618.service - OpenSSH per-connection server daemon (10.0.0.1:39618). 
Apr 17 23:51:06.644379 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:39612.service: Deactivated successfully. Apr 17 23:51:06.646213 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:51:06.646618 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:51:06.647917 systemd-logind[1553]: Removed session 2. Apr 17 23:51:06.671764 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 39618 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:06.672989 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:06.677845 systemd-logind[1553]: New session 3 of user core. Apr 17 23:51:06.687230 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:51:06.737710 sshd[1710]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:06.746818 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:39634.service - OpenSSH per-connection server daemon (10.0.0.1:39634). Apr 17 23:51:06.747308 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:39618.service: Deactivated successfully. Apr 17 23:51:06.749678 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:51:06.750066 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:51:06.751755 systemd-logind[1553]: Removed session 3. Apr 17 23:51:06.774977 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 39634 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:06.777342 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:06.783534 systemd-logind[1553]: New session 4 of user core. Apr 17 23:51:06.790178 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:51:06.849548 sshd[1718]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:06.868888 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:39638.service - OpenSSH per-connection server daemon (10.0.0.1:39638). 
Apr 17 23:51:06.869252 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:39634.service: Deactivated successfully. Apr 17 23:51:06.871550 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:51:06.871990 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:51:06.874030 systemd-logind[1553]: Removed session 4. Apr 17 23:51:06.905136 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 39638 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:06.907626 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:06.911948 systemd-logind[1553]: New session 5 of user core. Apr 17 23:51:06.921717 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:51:06.984356 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:51:06.984638 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:51:07.003723 sudo[1733]: pam_unix(sudo:session): session closed for user root Apr 17 23:51:07.006536 sshd[1726]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:07.020818 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:39654.service - OpenSSH per-connection server daemon (10.0.0.1:39654). Apr 17 23:51:07.021227 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:39638.service: Deactivated successfully. Apr 17 23:51:07.023209 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:51:07.023528 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:51:07.025161 systemd-logind[1553]: Removed session 5. Apr 17 23:51:07.049932 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 39654 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:07.051407 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:07.056623 systemd-logind[1553]: New session 6 of user core. 
Apr 17 23:51:07.062752 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:51:07.116770 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:51:07.116995 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:51:07.121307 sudo[1743]: pam_unix(sudo:session): session closed for user root Apr 17 23:51:07.125989 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:51:07.126241 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:51:07.148013 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:51:07.150304 auditctl[1746]: No rules Apr 17 23:51:07.150731 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:51:07.151047 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:51:07.153873 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:51:07.185913 augenrules[1765]: No rules Apr 17 23:51:07.187063 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:51:07.188428 sudo[1742]: pam_unix(sudo:session): session closed for user root Apr 17 23:51:07.190079 sshd[1735]: pam_unix(sshd:session): session closed for user core Apr 17 23:51:07.198886 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:39670.service - OpenSSH per-connection server daemon (10.0.0.1:39670). Apr 17 23:51:07.199210 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:39654.service: Deactivated successfully. Apr 17 23:51:07.200761 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:51:07.201167 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:51:07.202412 systemd-logind[1553]: Removed session 6. 
Apr 17 23:51:07.227506 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 39670 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:51:07.228852 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:51:07.236062 systemd-logind[1553]: New session 7 of user core. Apr 17 23:51:07.247949 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:51:07.303766 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:51:07.304000 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:51:07.566734 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:51:07.566855 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:51:07.848861 dockerd[1797]: time="2026-04-17T23:51:07.847656910Z" level=info msg="Starting up" Apr 17 23:51:08.110362 dockerd[1797]: time="2026-04-17T23:51:08.110158598Z" level=info msg="Loading containers: start." Apr 17 23:51:08.271578 kernel: Initializing XFRM netlink socket Apr 17 23:51:08.372093 systemd-networkd[1255]: docker0: Link UP Apr 17 23:51:08.400169 dockerd[1797]: time="2026-04-17T23:51:08.399998121Z" level=info msg="Loading containers: done." 
Apr 17 23:51:08.416366 dockerd[1797]: time="2026-04-17T23:51:08.416284646Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:51:08.416891 dockerd[1797]: time="2026-04-17T23:51:08.416487705Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:51:08.416891 dockerd[1797]: time="2026-04-17T23:51:08.416563280Z" level=info msg="Daemon has completed initialization" Apr 17 23:51:08.461387 dockerd[1797]: time="2026-04-17T23:51:08.461275207Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:51:08.461590 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:51:08.930951 containerd[1575]: time="2026-04-17T23:51:08.930885228Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:51:09.451534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706255574.mount: Deactivated successfully. 
Apr 17 23:51:10.357413 containerd[1575]: time="2026-04-17T23:51:10.357283059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:10.358192 containerd[1575]: time="2026-04-17T23:51:10.358088139Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 17 23:51:10.359835 containerd[1575]: time="2026-04-17T23:51:10.359765932Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:10.363098 containerd[1575]: time="2026-04-17T23:51:10.362740292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:10.363595 containerd[1575]: time="2026-04-17T23:51:10.363548572Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.432632218s" Apr 17 23:51:10.363650 containerd[1575]: time="2026-04-17T23:51:10.363618828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:51:10.364257 containerd[1575]: time="2026-04-17T23:51:10.364223983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:51:11.319388 containerd[1575]: time="2026-04-17T23:51:11.319277568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:11.320046 containerd[1575]: time="2026-04-17T23:51:11.319953582Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 17 23:51:11.320980 containerd[1575]: time="2026-04-17T23:51:11.320892459Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:11.323672 containerd[1575]: time="2026-04-17T23:51:11.323615635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:11.324510 containerd[1575]: time="2026-04-17T23:51:11.324412867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 960.163268ms" Apr 17 23:51:11.324574 containerd[1575]: time="2026-04-17T23:51:11.324522985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:51:11.325373 containerd[1575]: time="2026-04-17T23:51:11.325337167Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:51:12.090531 containerd[1575]: time="2026-04-17T23:51:12.090278431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:12.091644 containerd[1575]: 
time="2026-04-17T23:51:12.091586874Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 17 23:51:12.092785 containerd[1575]: time="2026-04-17T23:51:12.092738317Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:12.095203 containerd[1575]: time="2026-04-17T23:51:12.095161813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:12.096263 containerd[1575]: time="2026-04-17T23:51:12.096225091Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 770.844328ms" Apr 17 23:51:12.096299 containerd[1575]: time="2026-04-17T23:51:12.096268479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:51:12.096995 containerd[1575]: time="2026-04-17T23:51:12.096962363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:51:13.017717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648753084.mount: Deactivated successfully. Apr 17 23:51:13.018631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:51:13.030669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:13.138938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:51:13.142682 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:51:13.189078 kubelet[2033]: E0417 23:51:13.188930 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:51:13.192332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:51:13.192537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:51:13.437717 containerd[1575]: time="2026-04-17T23:51:13.437361641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:13.438728 containerd[1575]: time="2026-04-17T23:51:13.438679705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 17 23:51:13.439922 containerd[1575]: time="2026-04-17T23:51:13.439869627Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:13.441862 containerd[1575]: time="2026-04-17T23:51:13.441814933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:13.442189 containerd[1575]: time="2026-04-17T23:51:13.442157861Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.345130635s" Apr 17 23:51:13.442221 containerd[1575]: time="2026-04-17T23:51:13.442193821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:51:13.442958 containerd[1575]: time="2026-04-17T23:51:13.442815780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:51:13.893938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208655269.mount: Deactivated successfully. Apr 17 23:51:14.603201 containerd[1575]: time="2026-04-17T23:51:14.603021287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.604259 containerd[1575]: time="2026-04-17T23:51:14.604189845Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 17 23:51:14.605635 containerd[1575]: time="2026-04-17T23:51:14.605580195Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.610295 containerd[1575]: time="2026-04-17T23:51:14.610221623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.611940 containerd[1575]: time="2026-04-17T23:51:14.611881634Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.169041013s" Apr 17 23:51:14.611983 containerd[1575]: time="2026-04-17T23:51:14.611945405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:51:14.612681 containerd[1575]: time="2026-04-17T23:51:14.612633682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:51:14.963269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176783854.mount: Deactivated successfully. Apr 17 23:51:14.971050 containerd[1575]: time="2026-04-17T23:51:14.970946563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.971827 containerd[1575]: time="2026-04-17T23:51:14.971775002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 23:51:14.972700 containerd[1575]: time="2026-04-17T23:51:14.972651357Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.977734 containerd[1575]: time="2026-04-17T23:51:14.975834793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:14.977734 containerd[1575]: time="2026-04-17T23:51:14.977427361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 364.616838ms" Apr 17 
23:51:14.977734 containerd[1575]: time="2026-04-17T23:51:14.977561302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:51:14.978952 containerd[1575]: time="2026-04-17T23:51:14.978916140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:51:15.372850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863483661.mount: Deactivated successfully. Apr 17 23:51:16.105644 containerd[1575]: time="2026-04-17T23:51:16.105559103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:16.106531 containerd[1575]: time="2026-04-17T23:51:16.106479129Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 17 23:51:16.108587 containerd[1575]: time="2026-04-17T23:51:16.108535101Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:16.111199 containerd[1575]: time="2026-04-17T23:51:16.111110636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:16.112009 containerd[1575]: time="2026-04-17T23:51:16.111955330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.132996418s" Apr 17 23:51:16.112009 containerd[1575]: time="2026-04-17T23:51:16.112003580Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:51:18.911699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:18.924759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:18.951540 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-7.scope)... Apr 17 23:51:18.951577 systemd[1]: Reloading... Apr 17 23:51:19.019542 zram_generator::config[2232]: No configuration found. Apr 17 23:51:19.116746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:51:19.185648 systemd[1]: Reloading finished in 233 ms. Apr 17 23:51:19.230963 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:51:19.231039 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:51:19.231382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:19.238842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:19.352530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:19.357257 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:51:19.433215 kernel: hrtimer: interrupt took 12639141 ns Apr 17 23:51:19.449572 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:51:19.449572 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 17 23:51:19.449572 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:51:19.449572 kubelet[2292]: I0417 23:51:19.449276 2292 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:51:20.297673 kubelet[2292]: I0417 23:51:20.297608 2292 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:51:20.297673 kubelet[2292]: I0417 23:51:20.297652 2292 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:51:20.297858 kubelet[2292]: I0417 23:51:20.297829 2292 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:51:20.319841 kubelet[2292]: I0417 23:51:20.319799 2292 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:51:20.320341 kubelet[2292]: E0417 23:51:20.320199 2292 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:51:20.331316 kubelet[2292]: E0417 23:51:20.331158 2292 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:51:20.331316 kubelet[2292]: I0417 23:51:20.331281 2292 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:51:20.335318 kubelet[2292]: I0417 23:51:20.335252 2292 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 17 23:51:20.335693 kubelet[2292]: I0417 23:51:20.335622 2292 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:51:20.335831 kubelet[2292]: I0417 23:51:20.335667 2292 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 17 23:51:20.335831 kubelet[2292]: I0417 23:51:20.335825 2292 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:51:20.335831 kubelet[2292]: I0417 23:51:20.335832 2292 container_manager_linux.go:303] "Creating device plugin manager"
Apr 17 23:51:20.335968 kubelet[2292]: I0417 23:51:20.335933 2292 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:51:20.339391 kubelet[2292]: I0417 23:51:20.339273 2292 kubelet.go:480] "Attempting to sync node with API server"
Apr 17 23:51:20.339391 kubelet[2292]: I0417 23:51:20.339363 2292 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:51:20.339391 kubelet[2292]: I0417 23:51:20.339382 2292 kubelet.go:386] "Adding apiserver pod source"
Apr 17 23:51:20.339391 kubelet[2292]: I0417 23:51:20.339408 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:51:20.342673 kubelet[2292]: E0417 23:51:20.342625 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:51:20.343500 kubelet[2292]: I0417 23:51:20.342985 2292 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:51:20.343500 kubelet[2292]: E0417 23:51:20.343217 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:51:20.343500 kubelet[2292]: I0417 23:51:20.343498 2292 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:51:20.344568 kubelet[2292]: W0417 23:51:20.344556 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:51:20.348653 kubelet[2292]: I0417 23:51:20.348594 2292 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 23:51:20.348713 kubelet[2292]: I0417 23:51:20.348665 2292 server.go:1289] "Started kubelet"
Apr 17 23:51:20.349648 kubelet[2292]: I0417 23:51:20.349635 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:51:20.351507 kubelet[2292]: I0417 23:51:20.350521 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:51:20.352131 kubelet[2292]: I0417 23:51:20.349583 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:51:20.352544 kubelet[2292]: I0417 23:51:20.352525 2292 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:51:20.352638 kubelet[2292]: I0417 23:51:20.349657 2292 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:51:20.353249 kubelet[2292]: E0417 23:51:20.353235 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 17 23:51:20.353336 kubelet[2292]: I0417 23:51:20.353329 2292 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:51:20.353611 kubelet[2292]: I0417 23:51:20.353576 2292 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:51:20.353654 kubelet[2292]: I0417 23:51:20.353638 2292 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:51:20.353957 kubelet[2292]: E0417 23:51:20.353936 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:51:20.354025 kubelet[2292]: E0417 23:51:20.352753 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a749f1ac5c5639 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:51:20.348628537 +0000 UTC m=+0.985981277,LastTimestamp:2026-04-17 23:51:20.348628537 +0000 UTC m=+0.985981277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 17 23:51:20.354553 kubelet[2292]: I0417 23:51:20.354484 2292 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:51:20.357330 kubelet[2292]: E0417 23:51:20.357312 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms"
Apr 17 23:51:20.359554 kubelet[2292]: I0417 23:51:20.359543 2292 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:51:20.359738 kubelet[2292]: I0417 23:51:20.359729 2292 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:51:20.359818 kubelet[2292]: I0417 23:51:20.359809 2292 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:51:20.361588 kubelet[2292]: E0417 23:51:20.359615 2292 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:51:20.380409 kubelet[2292]: I0417 23:51:20.380355 2292 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:51:20.382261 kubelet[2292]: I0417 23:51:20.382103 2292 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:51:20.383807 kubelet[2292]: I0417 23:51:20.383761 2292 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:51:20.383883 kubelet[2292]: I0417 23:51:20.383841 2292 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:51:20.383883 kubelet[2292]: I0417 23:51:20.383848 2292 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:51:20.383883 kubelet[2292]: E0417 23:51:20.383877 2292 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:51:20.383972 kubelet[2292]: I0417 23:51:20.383354 2292 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:51:20.383972 kubelet[2292]: I0417 23:51:20.383940 2292 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:51:20.383972 kubelet[2292]: I0417 23:51:20.383952 2292 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:51:20.387056 kubelet[2292]: E0417 23:51:20.386980 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:51:20.389533 kubelet[2292]: I0417 23:51:20.389505 2292 policy_none.go:49] "None policy: Start"
Apr 17 23:51:20.389615 kubelet[2292]: I0417 23:51:20.389610 2292 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:51:20.389645 kubelet[2292]: I0417 23:51:20.389641 2292 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:51:20.396056 kubelet[2292]: E0417 23:51:20.395543 2292 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:51:20.396056 kubelet[2292]: I0417 23:51:20.395666 2292 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:51:20.396056 kubelet[2292]: I0417 23:51:20.395673 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:51:20.396056 kubelet[2292]: I0417 23:51:20.395953 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:51:20.397498 kubelet[2292]: E0417 23:51:20.397426 2292 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:51:20.397572 kubelet[2292]: E0417 23:51:20.397563 2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 17 23:51:20.494538 kubelet[2292]: E0417 23:51:20.494334 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:51:20.497070 kubelet[2292]: E0417 23:51:20.496850 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:51:20.497489 kubelet[2292]: I0417 23:51:20.497369 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:51:20.497715 kubelet[2292]: E0417 23:51:20.497680 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Apr 17 23:51:20.499618 kubelet[2292]: E0417 23:51:20.499578 2292 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 17 23:51:20.559354 kubelet[2292]: E0417 23:51:20.559025 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms"
Apr 17 23:51:20.655543 kubelet[2292]: I0417 23:51:20.655364 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost"
Apr 17 23:51:20.655543 kubelet[2292]: I0417 23:51:20.655529 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:51:20.655543 kubelet[2292]: I0417 23:51:20.655550 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:20.655875 kubelet[2292]: I0417 23:51:20.655583 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:20.655875 kubelet[2292]: I0417 23:51:20.655600 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:20.655875 kubelet[2292]: I0417 23:51:20.655614 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:20.655875 kubelet[2292]: I0417 23:51:20.655628 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:51:20.655875 kubelet[2292]: I0417 23:51:20.655641 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost"
Apr 17 23:51:20.655964 kubelet[2292]: I0417 23:51:20.655656 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:20.701308 kubelet[2292]: I0417 23:51:20.701097 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:51:20.701620 kubelet[2292]: E0417 23:51:20.701526 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Apr 17 23:51:20.796251 kubelet[2292]: E0417 23:51:20.795849 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:20.797681 kubelet[2292]: E0417 23:51:20.797103 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:20.797744 containerd[1575]: time="2026-04-17T23:51:20.797371016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87c5bb1ce9ba081b5df8d5f8a874ab83,Namespace:kube-system,Attempt:0,}"
Apr 17 23:51:20.797744 containerd[1575]: time="2026-04-17T23:51:20.797621150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}"
Apr 17 23:51:20.800543 kubelet[2292]: E0417 23:51:20.800496 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:20.801141 containerd[1575]: time="2026-04-17T23:51:20.801061841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}"
Apr 17 23:51:20.960085 kubelet[2292]: E0417 23:51:20.959777 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms"
Apr 17 23:51:21.104672 kubelet[2292]: I0417 23:51:21.104569 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:51:21.105072 kubelet[2292]: E0417 23:51:21.105025 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Apr 17 23:51:21.152698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753763320.mount: Deactivated successfully.
Apr 17 23:51:21.161273 containerd[1575]: time="2026-04-17T23:51:21.161149093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:51:21.163610 containerd[1575]: time="2026-04-17T23:51:21.163527172Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:51:21.167577 containerd[1575]: time="2026-04-17T23:51:21.167524276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988"
Apr 17 23:51:21.168684 containerd[1575]: time="2026-04-17T23:51:21.168550191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:51:21.169709 containerd[1575]: time="2026-04-17T23:51:21.169552875Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:51:21.171044 containerd[1575]: time="2026-04-17T23:51:21.170956811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:51:21.171944 containerd[1575]: time="2026-04-17T23:51:21.171804929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:51:21.174079 containerd[1575]: time="2026-04-17T23:51:21.173903175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:51:21.174333 containerd[1575]: time="2026-04-17T23:51:21.174313590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 373.168022ms"
Apr 17 23:51:21.178803 containerd[1575]: time="2026-04-17T23:51:21.178723693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 381.04329ms"
Apr 17 23:51:21.182348 containerd[1575]: time="2026-04-17T23:51:21.181983973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 384.542679ms"
Apr 17 23:51:21.268060 containerd[1575]: time="2026-04-17T23:51:21.267770280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:21.268060 containerd[1575]: time="2026-04-17T23:51:21.267906751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:21.268261 containerd[1575]: time="2026-04-17T23:51:21.267976348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.268261 containerd[1575]: time="2026-04-17T23:51:21.268164756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.270073 containerd[1575]: time="2026-04-17T23:51:21.269969484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:21.270073 containerd[1575]: time="2026-04-17T23:51:21.270022060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:21.270298 containerd[1575]: time="2026-04-17T23:51:21.270170548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.270663 containerd[1575]: time="2026-04-17T23:51:21.270573529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.274475 containerd[1575]: time="2026-04-17T23:51:21.273818739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:21.274475 containerd[1575]: time="2026-04-17T23:51:21.273862542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:21.274475 containerd[1575]: time="2026-04-17T23:51:21.273874783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.274475 containerd[1575]: time="2026-04-17T23:51:21.273933156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:21.339279 containerd[1575]: time="2026-04-17T23:51:21.339099455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"f105124596fb601ad4383d7def9e711fd38ed0598c7da0a2c83092186df54cc0\""
Apr 17 23:51:21.342524 kubelet[2292]: E0417 23:51:21.342381 2292 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:51:21.342876 kubelet[2292]: E0417 23:51:21.342825 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:21.347328 containerd[1575]: time="2026-04-17T23:51:21.347307973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff2521c9a700f8bfc18589254db6d5f9c2f2333971ff180802fd6b80ce69e63\""
Apr 17 23:51:21.347613 containerd[1575]: time="2026-04-17T23:51:21.347598375Z" level=info msg="CreateContainer within sandbox \"f105124596fb601ad4383d7def9e711fd38ed0598c7da0a2c83092186df54cc0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 17 23:51:21.347890 kubelet[2292]: E0417 23:51:21.347847 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:21.349432 containerd[1575]: time="2026-04-17T23:51:21.349371515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87c5bb1ce9ba081b5df8d5f8a874ab83,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd4899ce52607ec310b1a1e0a3130d185d745d5c345c0b98e213ea4585f2cf70\""
Apr 17 23:51:21.350053 kubelet[2292]: E0417 23:51:21.350031 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:21.351607 containerd[1575]: time="2026-04-17T23:51:21.351406155Z" level=info msg="CreateContainer within sandbox \"dff2521c9a700f8bfc18589254db6d5f9c2f2333971ff180802fd6b80ce69e63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 17 23:51:21.369000 containerd[1575]: time="2026-04-17T23:51:21.368884363Z" level=info msg="CreateContainer within sandbox \"fd4899ce52607ec310b1a1e0a3130d185d745d5c345c0b98e213ea4585f2cf70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 17 23:51:21.378868 containerd[1575]: time="2026-04-17T23:51:21.378742298Z" level=info msg="CreateContainer within sandbox \"dff2521c9a700f8bfc18589254db6d5f9c2f2333971ff180802fd6b80ce69e63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"876149dbc98dc399f1b80dc3ca175a7ad5b943fda2cf7470d2d7037ee5746d41\""
Apr 17 23:51:21.379702 containerd[1575]: time="2026-04-17T23:51:21.379658624Z" level=info msg="StartContainer for \"876149dbc98dc399f1b80dc3ca175a7ad5b943fda2cf7470d2d7037ee5746d41\""
Apr 17 23:51:21.385980 containerd[1575]: time="2026-04-17T23:51:21.385859130Z" level=info msg="CreateContainer within sandbox \"f105124596fb601ad4383d7def9e711fd38ed0598c7da0a2c83092186df54cc0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd0437255fff0c007b831bbbe12b4da4115d58c27837ba3954f67c5978b3a5bb\""
Apr 17 23:51:21.386727 containerd[1575]: time="2026-04-17T23:51:21.386707251Z" level=info msg="StartContainer for \"fd0437255fff0c007b831bbbe12b4da4115d58c27837ba3954f67c5978b3a5bb\""
Apr 17 23:51:21.400488 containerd[1575]: time="2026-04-17T23:51:21.400321538Z" level=info msg="CreateContainer within sandbox \"fd4899ce52607ec310b1a1e0a3130d185d745d5c345c0b98e213ea4585f2cf70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e993c38c57105e53123cd4458f7116164d6be1a856bba5c855b3009812fce14a\""
Apr 17 23:51:21.400975 containerd[1575]: time="2026-04-17T23:51:21.400783808Z" level=info msg="StartContainer for \"e993c38c57105e53123cd4458f7116164d6be1a856bba5c855b3009812fce14a\""
Apr 17 23:51:21.456885 containerd[1575]: time="2026-04-17T23:51:21.455947809Z" level=info msg="StartContainer for \"876149dbc98dc399f1b80dc3ca175a7ad5b943fda2cf7470d2d7037ee5746d41\" returns successfully"
Apr 17 23:51:21.470121 containerd[1575]: time="2026-04-17T23:51:21.470049682Z" level=info msg="StartContainer for \"fd0437255fff0c007b831bbbe12b4da4115d58c27837ba3954f67c5978b3a5bb\" returns successfully"
Apr 17 23:51:21.492544 containerd[1575]: time="2026-04-17T23:51:21.492477801Z" level=info msg="StartContainer for \"e993c38c57105e53123cd4458f7116164d6be1a856bba5c855b3009812fce14a\" returns successfully"
Apr 17 23:51:21.909513 kubelet[2292]: I0417 23:51:21.907295 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 17 23:51:22.270740 kubelet[2292]: E0417 23:51:22.270635 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 17 23:51:22.343033 kubelet[2292]: I0417 23:51:22.342972 2292 apiserver.go:52] "Watching apiserver"
Apr 17 23:51:22.354074 kubelet[2292]: I0417 23:51:22.353942 2292 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 23:51:22.365150 kubelet[2292]: I0417 23:51:22.364847 2292 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 17 23:51:22.365150 kubelet[2292]: E0417 23:51:22.364902 2292 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 17 23:51:22.402085 kubelet[2292]: I0417 23:51:22.402013 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:51:22.410512 kubelet[2292]: I0417 23:51:22.408709 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:22.417031 kubelet[2292]: I0417 23:51:22.416882 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:51:22.417364 kubelet[2292]: E0417 23:51:22.417302 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 17 23:51:22.417543 kubelet[2292]: E0417 23:51:22.417510 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:22.417657 kubelet[2292]: E0417 23:51:22.417615 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 17 23:51:22.417737 kubelet[2292]: E0417 23:51:22.417704 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:22.421544 kubelet[2292]: E0417 23:51:22.421355 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 17 23:51:22.423872 kubelet[2292]: E0417
23:51:22.423857 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:22.455001 kubelet[2292]: I0417 23:51:22.454922 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:22.457244 kubelet[2292]: E0417 23:51:22.457069 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:22.457244 kubelet[2292]: I0417 23:51:22.457090 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:22.460661 kubelet[2292]: E0417 23:51:22.460602 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:22.460661 kubelet[2292]: I0417 23:51:22.460648 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:22.463220 kubelet[2292]: E0417 23:51:22.462837 2292 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:23.419971 kubelet[2292]: I0417 23:51:23.419900 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:23.420520 kubelet[2292]: I0417 23:51:23.420003 2292 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:23.420520 kubelet[2292]: I0417 23:51:23.420331 2292 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:23.431665 kubelet[2292]: E0417 23:51:23.431601 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:23.434515 kubelet[2292]: E0417 23:51:23.434311 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:23.434515 kubelet[2292]: E0417 23:51:23.434292 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:24.424315 kubelet[2292]: E0417 23:51:24.424282 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:24.426109 kubelet[2292]: E0417 23:51:24.424383 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:24.426109 kubelet[2292]: E0417 23:51:24.424552 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:24.818289 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-7.scope)... Apr 17 23:51:24.818323 systemd[1]: Reloading... Apr 17 23:51:24.877500 zram_generator::config[2622]: No configuration found. Apr 17 23:51:24.984270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 17 23:51:25.044556 systemd[1]: Reloading finished in 225 ms. Apr 17 23:51:25.078272 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:25.091913 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:51:25.092542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:25.101703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:51:25.217946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:51:25.222514 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:51:25.264715 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:51:25.264715 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:51:25.264715 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:51:25.265812 kubelet[2677]: I0417 23:51:25.264711 2677 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:51:25.271393 kubelet[2677]: I0417 23:51:25.271341 2677 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:51:25.271393 kubelet[2677]: I0417 23:51:25.271360 2677 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:51:25.271686 kubelet[2677]: I0417 23:51:25.271593 2677 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:51:25.272705 kubelet[2677]: I0417 23:51:25.272623 2677 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:51:25.277417 kubelet[2677]: I0417 23:51:25.277333 2677 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:51:25.281257 kubelet[2677]: E0417 23:51:25.281186 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:51:25.281257 kubelet[2677]: I0417 23:51:25.281256 2677 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:51:25.285008 kubelet[2677]: I0417 23:51:25.284965 2677 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:51:25.285540 kubelet[2677]: I0417 23:51:25.285423 2677 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:51:25.285639 kubelet[2677]: I0417 23:51:25.285495 2677 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:51:25.285639 kubelet[2677]: I0417 23:51:25.285605 2677 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:51:25.285639 
kubelet[2677]: I0417 23:51:25.285611 2677 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:51:25.285768 kubelet[2677]: I0417 23:51:25.285648 2677 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:51:25.285842 kubelet[2677]: I0417 23:51:25.285801 2677 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:51:25.285842 kubelet[2677]: I0417 23:51:25.285811 2677 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:51:25.285928 kubelet[2677]: I0417 23:51:25.285883 2677 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:51:25.285928 kubelet[2677]: I0417 23:51:25.285897 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:51:25.289525 kubelet[2677]: I0417 23:51:25.289378 2677 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:51:25.292560 kubelet[2677]: I0417 23:51:25.290604 2677 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:51:25.296113 kubelet[2677]: I0417 23:51:25.295937 2677 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:51:25.296348 kubelet[2677]: I0417 23:51:25.296338 2677 server.go:1289] "Started kubelet" Apr 17 23:51:25.299414 kubelet[2677]: I0417 23:51:25.299396 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:51:25.307924 kubelet[2677]: I0417 23:51:25.307895 2677 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:51:25.308927 kubelet[2677]: I0417 23:51:25.308898 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:51:25.310573 kubelet[2677]: I0417 23:51:25.310074 2677 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:51:25.310573 kubelet[2677]: E0417 23:51:25.310306 2677 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:51:25.310939 kubelet[2677]: E0417 23:51:25.310854 2677 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:51:25.310939 kubelet[2677]: I0417 23:51:25.310927 2677 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:51:25.311040 kubelet[2677]: I0417 23:51:25.311001 2677 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:51:25.311106 kubelet[2677]: I0417 23:51:25.311034 2677 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:51:25.313408 kubelet[2677]: I0417 23:51:25.311811 2677 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:51:25.313408 kubelet[2677]: I0417 23:51:25.312869 2677 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:51:25.313408 kubelet[2677]: I0417 23:51:25.313347 2677 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:51:25.316802 kubelet[2677]: I0417 23:51:25.316426 2677 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:51:25.316802 kubelet[2677]: I0417 23:51:25.316501 2677 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:51:25.317294 kubelet[2677]: I0417 23:51:25.317284 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:51:25.326585 kubelet[2677]: I0417 23:51:25.326563 2677 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:51:25.326900 kubelet[2677]: I0417 23:51:25.326677 2677 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:51:25.326900 kubelet[2677]: I0417 23:51:25.326693 2677 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:51:25.326900 kubelet[2677]: I0417 23:51:25.326700 2677 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:51:25.326900 kubelet[2677]: E0417 23:51:25.326733 2677 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374591 2677 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374624 2677 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374663 2677 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374763 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374770 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374786 2677 policy_none.go:49] "None policy: Start" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374793 2677 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:51:25.374793 kubelet[2677]: I0417 23:51:25.374800 2677 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:51:25.375386 kubelet[2677]: I0417 23:51:25.374867 2677 state_mem.go:75] "Updated machine memory state" Apr 17 23:51:25.377152 kubelet[2677]: E0417 23:51:25.376070 2677 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:51:25.377152 kubelet[2677]: I0417 
23:51:25.376197 2677 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:51:25.377152 kubelet[2677]: I0417 23:51:25.376247 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:51:25.377331 kubelet[2677]: I0417 23:51:25.377290 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:51:25.379755 kubelet[2677]: E0417 23:51:25.379622 2677 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:51:25.428868 kubelet[2677]: I0417 23:51:25.428551 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.428868 kubelet[2677]: I0417 23:51:25.428654 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:25.428868 kubelet[2677]: I0417 23:51:25.428554 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:25.438316 kubelet[2677]: E0417 23:51:25.438185 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.438766 kubelet[2677]: E0417 23:51:25.438677 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:25.439036 kubelet[2677]: E0417 23:51:25.438933 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:25.484820 kubelet[2677]: I0417 23:51:25.484740 2677 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:51:25.495671 kubelet[2677]: I0417 23:51:25.495542 2677 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 17 23:51:25.495835 kubelet[2677]: I0417 23:51:25.495707 2677 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:51:25.512763 kubelet[2677]: I0417 23:51:25.512647 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:25.512763 kubelet[2677]: I0417 23:51:25.512693 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:25.512763 kubelet[2677]: I0417 23:51:25.512709 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.512763 kubelet[2677]: I0417 23:51:25.512721 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.512763 kubelet[2677]: I0417 23:51:25.512735 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.513103 kubelet[2677]: I0417 23:51:25.512747 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:25.513103 kubelet[2677]: I0417 23:51:25.512758 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87c5bb1ce9ba081b5df8d5f8a874ab83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87c5bb1ce9ba081b5df8d5f8a874ab83\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:25.513103 kubelet[2677]: I0417 23:51:25.512769 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.513103 kubelet[2677]: I0417 23:51:25.512779 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:51:25.740029 kubelet[2677]: E0417 23:51:25.739787 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:25.740029 kubelet[2677]: E0417 23:51:25.739820 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:25.740029 kubelet[2677]: E0417 23:51:25.739853 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:26.287534 kubelet[2677]: I0417 23:51:26.287118 2677 apiserver.go:52] "Watching apiserver" Apr 17 23:51:26.312427 kubelet[2677]: I0417 23:51:26.312366 2677 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:51:26.355820 kubelet[2677]: E0417 23:51:26.355280 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:26.355943 kubelet[2677]: I0417 23:51:26.355869 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:26.356172 kubelet[2677]: I0417 23:51:26.356075 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:26.367544 kubelet[2677]: E0417 23:51:26.367359 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:51:26.367862 kubelet[2677]: E0417 23:51:26.367740 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:26.382711 kubelet[2677]: E0417 23:51:26.382363 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Apr 17 23:51:26.384575 kubelet[2677]: E0417 23:51:26.383358 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:26.494271 kubelet[2677]: I0417 23:51:26.494092 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.493990053 podStartE2EDuration="3.493990053s" podCreationTimestamp="2026-04-17 23:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:26.493947735 +0000 UTC m=+1.267128259" watchObservedRunningTime="2026-04-17 23:51:26.493990053 +0000 UTC m=+1.267170572" Apr 17 23:51:26.516859 kubelet[2677]: I0417 23:51:26.516704 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5166840280000002 podStartE2EDuration="3.516684028s" podCreationTimestamp="2026-04-17 23:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:26.505423376 +0000 UTC m=+1.278603897" watchObservedRunningTime="2026-04-17 23:51:26.516684028 +0000 UTC m=+1.289864557" Apr 17 23:51:27.357832 kubelet[2677]: E0417 23:51:27.357661 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:27.357832 kubelet[2677]: E0417 23:51:27.357752 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:28.631862 kubelet[2677]: E0417 23:51:28.631528 2677 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:29.767353 kubelet[2677]: I0417 23:51:29.767282 2677 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:51:29.767893 containerd[1575]: time="2026-04-17T23:51:29.767718880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:51:29.768040 kubelet[2677]: I0417 23:51:29.767922 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:51:30.516694 kubelet[2677]: I0417 23:51:30.515390 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.515337365 podStartE2EDuration="7.515337365s" podCreationTimestamp="2026-04-17 23:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:26.516962541 +0000 UTC m=+1.290143063" watchObservedRunningTime="2026-04-17 23:51:30.515337365 +0000 UTC m=+5.288517933" Apr 17 23:51:30.567977 kubelet[2677]: I0417 23:51:30.567735 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a88d188-b100-4161-843e-b5bac547079b-kube-proxy\") pod \"kube-proxy-hcz5n\" (UID: \"3a88d188-b100-4161-843e-b5bac547079b\") " pod="kube-system/kube-proxy-hcz5n" Apr 17 23:51:30.567977 kubelet[2677]: I0417 23:51:30.567814 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a88d188-b100-4161-843e-b5bac547079b-lib-modules\") pod \"kube-proxy-hcz5n\" (UID: \"3a88d188-b100-4161-843e-b5bac547079b\") " pod="kube-system/kube-proxy-hcz5n" Apr 17 23:51:30.567977 kubelet[2677]: I0417 
23:51:30.567875 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kf58\" (UniqueName: \"kubernetes.io/projected/3a88d188-b100-4161-843e-b5bac547079b-kube-api-access-6kf58\") pod \"kube-proxy-hcz5n\" (UID: \"3a88d188-b100-4161-843e-b5bac547079b\") " pod="kube-system/kube-proxy-hcz5n"
Apr 17 23:51:30.567977 kubelet[2677]: I0417 23:51:30.567952 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a88d188-b100-4161-843e-b5bac547079b-xtables-lock\") pod \"kube-proxy-hcz5n\" (UID: \"3a88d188-b100-4161-843e-b5bac547079b\") " pod="kube-system/kube-proxy-hcz5n"
Apr 17 23:51:30.825036 kubelet[2677]: E0417 23:51:30.824764 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:30.825970 containerd[1575]: time="2026-04-17T23:51:30.825552922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcz5n,Uid:3a88d188-b100-4161-843e-b5bac547079b,Namespace:kube-system,Attempt:0,}"
Apr 17 23:51:30.868886 containerd[1575]: time="2026-04-17T23:51:30.866115256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:30.868886 containerd[1575]: time="2026-04-17T23:51:30.868257167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:30.868886 containerd[1575]: time="2026-04-17T23:51:30.868270150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:30.868886 containerd[1575]: time="2026-04-17T23:51:30.868706659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:30.919932 containerd[1575]: time="2026-04-17T23:51:30.919867956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcz5n,Uid:3a88d188-b100-4161-843e-b5bac547079b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d749ac1e41755a99e60ae6e16720947abdbf7c266ea14dde11af87224197f782\""
Apr 17 23:51:30.920856 kubelet[2677]: E0417 23:51:30.920797 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:30.927210 containerd[1575]: time="2026-04-17T23:51:30.927138181Z" level=info msg="CreateContainer within sandbox \"d749ac1e41755a99e60ae6e16720947abdbf7c266ea14dde11af87224197f782\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:51:30.960015 containerd[1575]: time="2026-04-17T23:51:30.959805283Z" level=info msg="CreateContainer within sandbox \"d749ac1e41755a99e60ae6e16720947abdbf7c266ea14dde11af87224197f782\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d51653d2cb14ac3578dd944837fa4878a87f24bae38c554c387aa2d70607bbc\""
Apr 17 23:51:30.964890 containerd[1575]: time="2026-04-17T23:51:30.964617816Z" level=info msg="StartContainer for \"0d51653d2cb14ac3578dd944837fa4878a87f24bae38c554c387aa2d70607bbc\""
Apr 17 23:51:31.061370 containerd[1575]: time="2026-04-17T23:51:31.061290321Z" level=info msg="StartContainer for \"0d51653d2cb14ac3578dd944837fa4878a87f24bae38c554c387aa2d70607bbc\" returns successfully"
Apr 17 23:51:31.072560 kubelet[2677]: I0417 23:51:31.072490 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvjw\" (UniqueName: \"kubernetes.io/projected/f32e9394-9f84-4f85-b868-0277486ba527-kube-api-access-4qvjw\") pod \"tigera-operator-6bf85f8dd-cnz7s\" (UID: \"f32e9394-9f84-4f85-b868-0277486ba527\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cnz7s"
Apr 17 23:51:31.072560 kubelet[2677]: I0417 23:51:31.072533 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f32e9394-9f84-4f85-b868-0277486ba527-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-cnz7s\" (UID: \"f32e9394-9f84-4f85-b868-0277486ba527\") " pod="tigera-operator/tigera-operator-6bf85f8dd-cnz7s"
Apr 17 23:51:31.296757 containerd[1575]: time="2026-04-17T23:51:31.296578192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cnz7s,Uid:f32e9394-9f84-4f85-b868-0277486ba527,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:51:31.332221 containerd[1575]: time="2026-04-17T23:51:31.331855087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:31.332221 containerd[1575]: time="2026-04-17T23:51:31.332127409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:31.332221 containerd[1575]: time="2026-04-17T23:51:31.332150372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:31.333888 containerd[1575]: time="2026-04-17T23:51:31.333677006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:31.368915 kubelet[2677]: E0417 23:51:31.368658 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:31.423499 containerd[1575]: time="2026-04-17T23:51:31.423388987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-cnz7s,Uid:f32e9394-9f84-4f85-b868-0277486ba527,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5b0aedbf51c0ee8e5d891c6fd222cf63dc7a444ad7586b44fe542dea6367486\""
Apr 17 23:51:31.425394 containerd[1575]: time="2026-04-17T23:51:31.425374031Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:51:32.412847 kubelet[2677]: E0417 23:51:32.412065 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:32.431850 kubelet[2677]: I0417 23:51:32.431750 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcz5n" podStartSLOduration=2.43173044 podStartE2EDuration="2.43173044s" podCreationTimestamp="2026-04-17 23:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:51:31.398984539 +0000 UTC m=+6.172165061" watchObservedRunningTime="2026-04-17 23:51:32.43173044 +0000 UTC m=+7.204910969"
Apr 17 23:51:32.860702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394859277.mount: Deactivated successfully.
Apr 17 23:51:33.376141 kubelet[2677]: E0417 23:51:33.375991 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:33.586037 containerd[1575]: time="2026-04-17T23:51:33.585884783Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:33.587059 containerd[1575]: time="2026-04-17T23:51:33.586987564Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 17 23:51:33.589608 containerd[1575]: time="2026-04-17T23:51:33.589544278Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:33.593637 containerd[1575]: time="2026-04-17T23:51:33.593575207Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:33.594190 containerd[1575]: time="2026-04-17T23:51:33.594132893Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.168722548s"
Apr 17 23:51:33.594303 containerd[1575]: time="2026-04-17T23:51:33.594186132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 17 23:51:33.600561 containerd[1575]: time="2026-04-17T23:51:33.600507156Z" level=info msg="CreateContainer within sandbox \"c5b0aedbf51c0ee8e5d891c6fd222cf63dc7a444ad7586b44fe542dea6367486\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:51:33.616061 containerd[1575]: time="2026-04-17T23:51:33.615959534Z" level=info msg="CreateContainer within sandbox \"c5b0aedbf51c0ee8e5d891c6fd222cf63dc7a444ad7586b44fe542dea6367486\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1\""
Apr 17 23:51:33.617679 containerd[1575]: time="2026-04-17T23:51:33.616867604Z" level=info msg="StartContainer for \"a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1\""
Apr 17 23:51:33.676831 containerd[1575]: time="2026-04-17T23:51:33.676200681Z" level=info msg="StartContainer for \"a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1\" returns successfully"
Apr 17 23:51:34.380094 kubelet[2677]: E0417 23:51:34.379958 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:35.623052 kubelet[2677]: E0417 23:51:35.622584 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:35.668066 kubelet[2677]: I0417 23:51:35.667960 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-cnz7s" podStartSLOduration=3.497508511 podStartE2EDuration="5.667945872s" podCreationTimestamp="2026-04-17 23:51:30 +0000 UTC" firstStartedPulling="2026-04-17 23:51:31.424867934 +0000 UTC m=+6.198048455" lastFinishedPulling="2026-04-17 23:51:33.595305298 +0000 UTC m=+8.368485816" observedRunningTime="2026-04-17 23:51:34.394816757 +0000 UTC m=+9.167997291" watchObservedRunningTime="2026-04-17 23:51:35.667945872 +0000 UTC m=+10.441126391"
Apr 17 23:51:35.999955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1-rootfs.mount: Deactivated successfully.
Apr 17 23:51:36.088538 containerd[1575]: time="2026-04-17T23:51:36.085650526Z" level=info msg="shim disconnected" id=a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1 namespace=k8s.io
Apr 17 23:51:36.088538 containerd[1575]: time="2026-04-17T23:51:36.088093327Z" level=warning msg="cleaning up after shim disconnected" id=a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1 namespace=k8s.io
Apr 17 23:51:36.088538 containerd[1575]: time="2026-04-17T23:51:36.088112694Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:51:36.390574 kubelet[2677]: I0417 23:51:36.388644 2677 scope.go:117] "RemoveContainer" containerID="a1c65e7dcab52af17919fc22038c85695c31cb79fdd5c275222da71483bc0bb1"
Apr 17 23:51:36.390574 kubelet[2677]: E0417 23:51:36.390363 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:36.394511 containerd[1575]: time="2026-04-17T23:51:36.393386643Z" level=info msg="CreateContainer within sandbox \"c5b0aedbf51c0ee8e5d891c6fd222cf63dc7a444ad7586b44fe542dea6367486\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:51:36.420071 containerd[1575]: time="2026-04-17T23:51:36.419949522Z" level=info msg="CreateContainer within sandbox \"c5b0aedbf51c0ee8e5d891c6fd222cf63dc7a444ad7586b44fe542dea6367486\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f071ab2e2887f06f2dbd81728c503554c8defaabd5fd26acdc67072ae54a9904\""
Apr 17 23:51:36.420782 containerd[1575]: time="2026-04-17T23:51:36.420671850Z" level=info msg="StartContainer for \"f071ab2e2887f06f2dbd81728c503554c8defaabd5fd26acdc67072ae54a9904\""
Apr 17 23:51:36.473573 containerd[1575]: time="2026-04-17T23:51:36.473352115Z" level=info msg="StartContainer for \"f071ab2e2887f06f2dbd81728c503554c8defaabd5fd26acdc67072ae54a9904\" returns successfully"
Apr 17 23:51:38.636987 kubelet[2677]: E0417 23:51:38.636922 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:39.002147 sudo[1778]: pam_unix(sudo:session): session closed for user root
Apr 17 23:51:39.003908 sshd[1771]: pam_unix(sshd:session): session closed for user core
Apr 17 23:51:39.006654 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:39670.service: Deactivated successfully.
Apr 17 23:51:39.009667 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:51:39.009830 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:51:39.011242 systemd-logind[1553]: Removed session 7.
Apr 17 23:51:41.919621 kubelet[2677]: I0417 23:51:41.889353 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a51facf8-dca1-4535-bcc8-c53a0c7aed5b-typha-certs\") pod \"calico-typha-796fb65997-jh58k\" (UID: \"a51facf8-dca1-4535-bcc8-c53a0c7aed5b\") " pod="calico-system/calico-typha-796fb65997-jh58k"
Apr 17 23:51:41.919621 kubelet[2677]: I0417 23:51:41.889992 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58g57\" (UniqueName: \"kubernetes.io/projected/a51facf8-dca1-4535-bcc8-c53a0c7aed5b-kube-api-access-58g57\") pod \"calico-typha-796fb65997-jh58k\" (UID: \"a51facf8-dca1-4535-bcc8-c53a0c7aed5b\") " pod="calico-system/calico-typha-796fb65997-jh58k"
Apr 17 23:51:41.919621 kubelet[2677]: I0417 23:51:41.891058 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a51facf8-dca1-4535-bcc8-c53a0c7aed5b-tigera-ca-bundle\") pod \"calico-typha-796fb65997-jh58k\" (UID: \"a51facf8-dca1-4535-bcc8-c53a0c7aed5b\") " pod="calico-system/calico-typha-796fb65997-jh58k"
Apr 17 23:51:42.092417 kubelet[2677]: I0417 23:51:42.092332 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t72hk\" (UniqueName: \"kubernetes.io/projected/657ff01c-246a-408c-b7c8-be54ff6f9f68-kube-api-access-t72hk\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092417 kubelet[2677]: I0417 23:51:42.092420 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-nodeproc\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092810 kubelet[2677]: I0417 23:51:42.092488 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-var-run-calico\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092810 kubelet[2677]: I0417 23:51:42.092502 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-bpffs\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092810 kubelet[2677]: I0417 23:51:42.092514 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-cni-bin-dir\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092810 kubelet[2677]: I0417 23:51:42.092524 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-cni-net-dir\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092810 kubelet[2677]: I0417 23:51:42.092535 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-sys-fs\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092899 kubelet[2677]: I0417 23:51:42.092546 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/657ff01c-246a-408c-b7c8-be54ff6f9f68-tigera-ca-bundle\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092899 kubelet[2677]: I0417 23:51:42.092557 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-policysync\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092899 kubelet[2677]: I0417 23:51:42.092567 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-var-lib-calico\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092899 kubelet[2677]: I0417 23:51:42.092615 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-xtables-lock\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092899 kubelet[2677]: I0417 23:51:42.092635 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-lib-modules\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092981 kubelet[2677]: I0417 23:51:42.092656 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-cni-log-dir\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092981 kubelet[2677]: I0417 23:51:42.092679 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/657ff01c-246a-408c-b7c8-be54ff6f9f68-flexvol-driver-host\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.092981 kubelet[2677]: I0417 23:51:42.092703 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/657ff01c-246a-408c-b7c8-be54ff6f9f68-node-certs\") pod \"calico-node-cvbw7\" (UID: \"657ff01c-246a-408c-b7c8-be54ff6f9f68\") " pod="calico-system/calico-node-cvbw7"
Apr 17 23:51:42.105367 kubelet[2677]: E0417 23:51:42.105008 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:42.195308 kubelet[2677]: I0417 23:51:42.193820 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5675cbd7-fdb0-43a9-beed-f1806791852c-kubelet-dir\") pod \"csi-node-driver-nxr77\" (UID: \"5675cbd7-fdb0-43a9-beed-f1806791852c\") " pod="calico-system/csi-node-driver-nxr77"
Apr 17 23:51:42.195308 kubelet[2677]: I0417 23:51:42.194172 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5675cbd7-fdb0-43a9-beed-f1806791852c-varrun\") pod \"csi-node-driver-nxr77\" (UID: \"5675cbd7-fdb0-43a9-beed-f1806791852c\") " pod="calico-system/csi-node-driver-nxr77"
Apr 17 23:51:42.195308 kubelet[2677]: I0417 23:51:42.194234 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48d6g\" (UniqueName: \"kubernetes.io/projected/5675cbd7-fdb0-43a9-beed-f1806791852c-kube-api-access-48d6g\") pod \"csi-node-driver-nxr77\" (UID: \"5675cbd7-fdb0-43a9-beed-f1806791852c\") " pod="calico-system/csi-node-driver-nxr77"
Apr 17 23:51:42.195308 kubelet[2677]: I0417 23:51:42.194305 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5675cbd7-fdb0-43a9-beed-f1806791852c-socket-dir\") pod \"csi-node-driver-nxr77\" (UID: \"5675cbd7-fdb0-43a9-beed-f1806791852c\") " pod="calico-system/csi-node-driver-nxr77"
Apr 17 23:51:42.195308 kubelet[2677]: I0417 23:51:42.194769 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5675cbd7-fdb0-43a9-beed-f1806791852c-registration-dir\") pod \"csi-node-driver-nxr77\" (UID: \"5675cbd7-fdb0-43a9-beed-f1806791852c\") " pod="calico-system/csi-node-driver-nxr77"
Apr 17 23:51:42.196813 kubelet[2677]: E0417 23:51:42.196683 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.196813 kubelet[2677]: W0417 23:51:42.196701 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.196813 kubelet[2677]: E0417 23:51:42.196719 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.197116 kubelet[2677]: E0417 23:51:42.197107 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.197156 kubelet[2677]: W0417 23:51:42.197150 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.197190 kubelet[2677]: E0417 23:51:42.197184 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.201486 kubelet[2677]: E0417 23:51:42.201350 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.201713 kubelet[2677]: W0417 23:51:42.201636 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.201806 kubelet[2677]: E0417 23:51:42.201740 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.212642 kubelet[2677]: E0417 23:51:42.212560 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.212642 kubelet[2677]: W0417 23:51:42.212626 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.212642 kubelet[2677]: E0417 23:51:42.212638 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.222374 kubelet[2677]: E0417 23:51:42.222311 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:42.223273 containerd[1575]: time="2026-04-17T23:51:42.223200264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796fb65997-jh58k,Uid:a51facf8-dca1-4535-bcc8-c53a0c7aed5b,Namespace:calico-system,Attempt:0,}"
Apr 17 23:51:42.260486 containerd[1575]: time="2026-04-17T23:51:42.259699725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:51:42.261819 containerd[1575]: time="2026-04-17T23:51:42.260766248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:51:42.261819 containerd[1575]: time="2026-04-17T23:51:42.260783015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:42.264428 containerd[1575]: time="2026-04-17T23:51:42.262909325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:51:42.293296 containerd[1575]: time="2026-04-17T23:51:42.293235578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvbw7,Uid:657ff01c-246a-408c-b7c8-be54ff6f9f68,Namespace:calico-system,Attempt:0,}"
Apr 17 23:51:42.296714 kubelet[2677]: E0417 23:51:42.296551 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.296714 kubelet[2677]: W0417 23:51:42.296591 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.296714 kubelet[2677]: E0417 23:51:42.296606 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.297012 kubelet[2677]: E0417 23:51:42.297004 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.297053 kubelet[2677]: W0417 23:51:42.297048 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.297156 kubelet[2677]: E0417 23:51:42.297087 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.299141 kubelet[2677]: E0417 23:51:42.299056 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.299201 kubelet[2677]: W0417 23:51:42.299154 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.299218 kubelet[2677]: E0417 23:51:42.299198 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.301749 kubelet[2677]: E0417 23:51:42.301632 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.302016 kubelet[2677]: W0417 23:51:42.301868 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.302016 kubelet[2677]: E0417 23:51:42.301881 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.302777 kubelet[2677]: E0417 23:51:42.302769 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.303122 kubelet[2677]: W0417 23:51:42.302959 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.303122 kubelet[2677]: E0417 23:51:42.302972 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.303382 kubelet[2677]: E0417 23:51:42.303375 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.303555 kubelet[2677]: W0417 23:51:42.303547 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.303656 kubelet[2677]: E0417 23:51:42.303621 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.303952 kubelet[2677]: E0417 23:51:42.303870 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.303952 kubelet[2677]: W0417 23:51:42.303876 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.303952 kubelet[2677]: E0417 23:51:42.303883 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.304033 kubelet[2677]: E0417 23:51:42.304028 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.304056 kubelet[2677]: W0417 23:51:42.304052 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.304086 kubelet[2677]: E0417 23:51:42.304081 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.304308 kubelet[2677]: E0417 23:51:42.304302 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.304339 kubelet[2677]: W0417 23:51:42.304335 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.304368 kubelet[2677]: E0417 23:51:42.304363 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.304742 kubelet[2677]: E0417 23:51:42.304659 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.304742 kubelet[2677]: W0417 23:51:42.304666 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.304742 kubelet[2677]: E0417 23:51:42.304673 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.304836 kubelet[2677]: E0417 23:51:42.304831 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.304860 kubelet[2677]: W0417 23:51:42.304856 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.304885 kubelet[2677]: E0417 23:51:42.304880 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.305102 kubelet[2677]: E0417 23:51:42.305096 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.305132 kubelet[2677]: W0417 23:51:42.305128 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.305165 kubelet[2677]: E0417 23:51:42.305160 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.305362 kubelet[2677]: E0417 23:51:42.305356 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.305546 kubelet[2677]: W0417 23:51:42.305391 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.305546 kubelet[2677]: E0417 23:51:42.305397 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.305649 kubelet[2677]: E0417 23:51:42.305643 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.305673 kubelet[2677]: W0417 23:51:42.305669 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.305705 kubelet[2677]: E0417 23:51:42.305699 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.306961 kubelet[2677]: E0417 23:51:42.306951 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.307015 kubelet[2677]: W0417 23:51:42.307009 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.307048 kubelet[2677]: E0417 23:51:42.307042 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.307283 kubelet[2677]: E0417 23:51:42.307277 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.307400 kubelet[2677]: W0417 23:51:42.307391 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.307517 kubelet[2677]: E0417 23:51:42.307509 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.307708 kubelet[2677]: E0417 23:51:42.307701 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.307826 kubelet[2677]: W0417 23:51:42.307742 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.307826 kubelet[2677]: E0417 23:51:42.307750 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:51:42.308183 kubelet[2677]: E0417 23:51:42.308125 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:42.308183 kubelet[2677]: W0417 23:51:42.308132 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:42.308183 kubelet[2677]: E0417 23:51:42.308138 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:51:42.308632 kubelet[2677]: E0417 23:51:42.308625 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.308785 kubelet[2677]: W0417 23:51:42.308670 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.308785 kubelet[2677]: E0417 23:51:42.308678 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:42.309230 kubelet[2677]: E0417 23:51:42.309156 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.309269 kubelet[2677]: W0417 23:51:42.309264 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.309381 kubelet[2677]: E0417 23:51:42.309375 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:42.309969 kubelet[2677]: E0417 23:51:42.309962 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.310014 kubelet[2677]: W0417 23:51:42.310009 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.310042 kubelet[2677]: E0417 23:51:42.310037 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:42.311785 kubelet[2677]: E0417 23:51:42.311749 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.311825 kubelet[2677]: W0417 23:51:42.311792 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.312054 kubelet[2677]: E0417 23:51:42.311834 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:42.318021 kubelet[2677]: E0417 23:51:42.317917 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.318021 kubelet[2677]: W0417 23:51:42.318027 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.318216 kubelet[2677]: E0417 23:51:42.318105 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:42.319772 kubelet[2677]: E0417 23:51:42.319756 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.319772 kubelet[2677]: W0417 23:51:42.319769 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.319969 kubelet[2677]: E0417 23:51:42.319781 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:42.320670 kubelet[2677]: E0417 23:51:42.320635 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.320670 kubelet[2677]: W0417 23:51:42.320644 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.320670 kubelet[2677]: E0417 23:51:42.320653 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:42.349032 kubelet[2677]: E0417 23:51:42.348958 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:42.349032 kubelet[2677]: W0417 23:51:42.348995 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:42.349032 kubelet[2677]: E0417 23:51:42.349016 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:42.357272 containerd[1575]: time="2026-04-17T23:51:42.356878021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:51:42.357272 containerd[1575]: time="2026-04-17T23:51:42.357171217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:51:42.357272 containerd[1575]: time="2026-04-17T23:51:42.357228490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:42.358103 containerd[1575]: time="2026-04-17T23:51:42.357870911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:51:42.358405 containerd[1575]: time="2026-04-17T23:51:42.358350771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796fb65997-jh58k,Uid:a51facf8-dca1-4535-bcc8-c53a0c7aed5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"366bd58d16f1934407b0e4963a79043844b9c480da1fad6d047b8b5018bf9902\"" Apr 17 23:51:42.360688 kubelet[2677]: E0417 23:51:42.359979 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:42.364100 containerd[1575]: time="2026-04-17T23:51:42.363892006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:51:42.445628 containerd[1575]: time="2026-04-17T23:51:42.445414810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cvbw7,Uid:657ff01c-246a-408c-b7c8-be54ff6f9f68,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\"" Apr 17 23:51:44.166799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439282012.mount: Deactivated successfully. 
Apr 17 23:51:44.328417 kubelet[2677]: E0417 23:51:44.328225 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c" Apr 17 23:51:45.074192 containerd[1575]: time="2026-04-17T23:51:45.074093891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:45.075215 containerd[1575]: time="2026-04-17T23:51:45.075165244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:51:45.076325 containerd[1575]: time="2026-04-17T23:51:45.076175192Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:45.078795 containerd[1575]: time="2026-04-17T23:51:45.078737094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:51:45.079577 containerd[1575]: time="2026-04-17T23:51:45.079516454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.715422513s" Apr 17 23:51:45.079619 containerd[1575]: time="2026-04-17T23:51:45.079581168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:51:45.081111 containerd[1575]: time="2026-04-17T23:51:45.081043709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:51:45.095558 containerd[1575]: time="2026-04-17T23:51:45.095297949Z" level=info msg="CreateContainer within sandbox \"366bd58d16f1934407b0e4963a79043844b9c480da1fad6d047b8b5018bf9902\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:51:45.167185 containerd[1575]: time="2026-04-17T23:51:45.167035613Z" level=info msg="CreateContainer within sandbox \"366bd58d16f1934407b0e4963a79043844b9c480da1fad6d047b8b5018bf9902\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"897766ec275f22ecc52e4986c0521f78d1b600146aae1be2db3e7d696e34d13a\"" Apr 17 23:51:45.168326 containerd[1575]: time="2026-04-17T23:51:45.168303223Z" level=info msg="StartContainer for \"897766ec275f22ecc52e4986c0521f78d1b600146aae1be2db3e7d696e34d13a\"" Apr 17 23:51:45.210187 systemd[1]: run-containerd-runc-k8s.io-897766ec275f22ecc52e4986c0521f78d1b600146aae1be2db3e7d696e34d13a-runc.k6PrZS.mount: Deactivated successfully. 
Apr 17 23:51:45.265757 containerd[1575]: time="2026-04-17T23:51:45.265618980Z" level=info msg="StartContainer for \"897766ec275f22ecc52e4986c0521f78d1b600146aae1be2db3e7d696e34d13a\" returns successfully" Apr 17 23:51:45.454082 kubelet[2677]: E0417 23:51:45.453880 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:51:45.478902 kubelet[2677]: I0417 23:51:45.478795 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796fb65997-jh58k" podStartSLOduration=1.761561178 podStartE2EDuration="4.478778688s" podCreationTimestamp="2026-04-17 23:51:41 +0000 UTC" firstStartedPulling="2026-04-17 23:51:42.363428814 +0000 UTC m=+17.136609335" lastFinishedPulling="2026-04-17 23:51:45.080646326 +0000 UTC m=+19.853826845" observedRunningTime="2026-04-17 23:51:45.477906253 +0000 UTC m=+20.251086780" watchObservedRunningTime="2026-04-17 23:51:45.478778688 +0000 UTC m=+20.251959226" Apr 17 23:51:45.516360 kubelet[2677]: E0417 23:51:45.516271 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.516360 kubelet[2677]: W0417 23:51:45.516317 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.516360 kubelet[2677]: E0417 23:51:45.516335 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.516736 kubelet[2677]: E0417 23:51:45.516706 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.516736 kubelet[2677]: W0417 23:51:45.516713 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.516736 kubelet[2677]: E0417 23:51:45.516722 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.517003 kubelet[2677]: E0417 23:51:45.516952 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.517003 kubelet[2677]: W0417 23:51:45.516983 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.517003 kubelet[2677]: E0417 23:51:45.516991 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.517229 kubelet[2677]: E0417 23:51:45.517201 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.517229 kubelet[2677]: W0417 23:51:45.517228 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.517266 kubelet[2677]: E0417 23:51:45.517234 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.517994 kubelet[2677]: E0417 23:51:45.517934 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.517994 kubelet[2677]: W0417 23:51:45.517948 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.517994 kubelet[2677]: E0417 23:51:45.517963 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.518256 kubelet[2677]: E0417 23:51:45.518204 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.518256 kubelet[2677]: W0417 23:51:45.518210 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.518256 kubelet[2677]: E0417 23:51:45.518216 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.519234 kubelet[2677]: E0417 23:51:45.518578 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.519234 kubelet[2677]: W0417 23:51:45.518589 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.519234 kubelet[2677]: E0417 23:51:45.518851 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.519578 kubelet[2677]: E0417 23:51:45.519363 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.519578 kubelet[2677]: W0417 23:51:45.519369 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.519578 kubelet[2677]: E0417 23:51:45.519377 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.519629 kubelet[2677]: E0417 23:51:45.519620 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.519629 kubelet[2677]: W0417 23:51:45.519625 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.519682 kubelet[2677]: E0417 23:51:45.519645 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.519920 kubelet[2677]: E0417 23:51:45.519845 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.519920 kubelet[2677]: W0417 23:51:45.519852 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.519920 kubelet[2677]: E0417 23:51:45.519863 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.520282 kubelet[2677]: E0417 23:51:45.520247 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.520282 kubelet[2677]: W0417 23:51:45.520275 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.520282 kubelet[2677]: E0417 23:51:45.520281 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.520783 kubelet[2677]: E0417 23:51:45.520663 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.520783 kubelet[2677]: W0417 23:51:45.520670 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.520783 kubelet[2677]: E0417 23:51:45.520676 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.520952 kubelet[2677]: E0417 23:51:45.520923 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.520952 kubelet[2677]: W0417 23:51:45.520950 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.520992 kubelet[2677]: E0417 23:51:45.520956 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.521234 kubelet[2677]: E0417 23:51:45.521190 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.521234 kubelet[2677]: W0417 23:51:45.521219 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.521234 kubelet[2677]: E0417 23:51:45.521224 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.521588 kubelet[2677]: E0417 23:51:45.521560 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.521588 kubelet[2677]: W0417 23:51:45.521586 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.521632 kubelet[2677]: E0417 23:51:45.521592 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.548987 kubelet[2677]: E0417 23:51:45.548894 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.548987 kubelet[2677]: W0417 23:51:45.548937 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.548987 kubelet[2677]: E0417 23:51:45.548958 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.549307 kubelet[2677]: E0417 23:51:45.549254 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.549307 kubelet[2677]: W0417 23:51:45.549269 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.549307 kubelet[2677]: E0417 23:51:45.549289 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.549836 kubelet[2677]: E0417 23:51:45.549799 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.549836 kubelet[2677]: W0417 23:51:45.549809 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.549836 kubelet[2677]: E0417 23:51:45.549819 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.550613 kubelet[2677]: E0417 23:51:45.550271 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.550871 kubelet[2677]: W0417 23:51:45.550673 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.550871 kubelet[2677]: E0417 23:51:45.550712 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.551359 kubelet[2677]: E0417 23:51:45.551166 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.551690 kubelet[2677]: W0417 23:51:45.551514 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.551690 kubelet[2677]: E0417 23:51:45.551546 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.551996 kubelet[2677]: E0417 23:51:45.551947 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.551996 kubelet[2677]: W0417 23:51:45.551977 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.551996 kubelet[2677]: E0417 23:51:45.551984 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.552267 kubelet[2677]: E0417 23:51:45.552230 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.552267 kubelet[2677]: W0417 23:51:45.552260 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.552267 kubelet[2677]: E0417 23:51:45.552270 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.552740 kubelet[2677]: E0417 23:51:45.552710 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.552740 kubelet[2677]: W0417 23:51:45.552737 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.552808 kubelet[2677]: E0417 23:51:45.552743 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:51:45.553085 kubelet[2677]: E0417 23:51:45.553057 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.553085 kubelet[2677]: W0417 23:51:45.553083 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.553123 kubelet[2677]: E0417 23:51:45.553089 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:51:45.554006 kubelet[2677]: E0417 23:51:45.553957 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:51:45.554006 kubelet[2677]: W0417 23:51:45.553997 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:51:45.554093 kubelet[2677]: E0417 23:51:45.554011 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:51:46.328647 kubelet[2677]: E0417 23:51:46.328424 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:46.454801 kubelet[2677]: I0417 23:51:46.454756 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:51:46.455156 kubelet[2677]: E0417 23:51:46.455120 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:46.530970 kubelet[2677]: E0417 23:51:46.530803 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:51:46.530970 kubelet[2677]: W0417 23:51:46.530846 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:51:46.530970 kubelet[2677]: E0417 23:51:46.530869 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:51:46.715669 containerd[1575]: time="2026-04-17T23:51:46.715171578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:46.717676 containerd[1575]: time="2026-04-17T23:51:46.717219783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 23:51:46.719114 containerd[1575]: time="2026-04-17T23:51:46.719039422Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:46.722393 containerd[1575]: time="2026-04-17T23:51:46.722256788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:46.723035 containerd[1575]: time="2026-04-17T23:51:46.722954856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.641776877s"
Apr 17 23:51:46.723035 containerd[1575]: time="2026-04-17T23:51:46.723017594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 23:51:46.730208 containerd[1575]: time="2026-04-17T23:51:46.730102631Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:51:46.747965 containerd[1575]: time="2026-04-17T23:51:46.747891748Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64\""
Apr 17 23:51:46.749967 containerd[1575]: time="2026-04-17T23:51:46.749912537Z" level=info msg="StartContainer for \"67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64\""
Apr 17 23:51:46.818788 containerd[1575]: time="2026-04-17T23:51:46.818654659Z" level=info msg="StartContainer for \"67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64\" returns successfully"
Apr 17 23:51:46.864254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64-rootfs.mount: Deactivated successfully.
Apr 17 23:51:46.869308 containerd[1575]: time="2026-04-17T23:51:46.869221647Z" level=info msg="shim disconnected" id=67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64 namespace=k8s.io
Apr 17 23:51:46.869308 containerd[1575]: time="2026-04-17T23:51:46.869284180Z" level=warning msg="cleaning up after shim disconnected" id=67dd8bd65b149455659bebadd124733d32601272b7721df5a97b1438748edd64 namespace=k8s.io
Apr 17 23:51:46.869308 containerd[1575]: time="2026-04-17T23:51:46.869293373Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:51:47.179762 update_engine[1562]: I20260417 23:51:47.179522 1562 update_attempter.cc:509] Updating boot flags...
Apr 17 23:51:47.209565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3464)
Apr 17 23:51:47.240619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3464)
Apr 17 23:51:47.461944 containerd[1575]: time="2026-04-17T23:51:47.461504580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:51:48.179542 kubelet[2677]: I0417 23:51:48.179203 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:51:48.180103 kubelet[2677]: E0417 23:51:48.179918 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:48.327534 kubelet[2677]: E0417 23:51:48.327170 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:48.463112 kubelet[2677]: E0417 23:51:48.462849 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:51:50.328326 kubelet[2677]: E0417 23:51:50.328100 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:52.328672 kubelet[2677]: E0417 23:51:52.328292 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:54.328214 kubelet[2677]: E0417 23:51:54.327993 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c"
Apr 17 23:51:55.789343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545808028.mount: Deactivated successfully.
Apr 17 23:51:55.911081 containerd[1575]: time="2026-04-17T23:51:55.910925531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:55.940587 containerd[1575]: time="2026-04-17T23:51:55.940384672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:51:55.942022 containerd[1575]: time="2026-04-17T23:51:55.941922812Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:55.946343 containerd[1575]: time="2026-04-17T23:51:55.946305905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:51:55.947340 containerd[1575]: time="2026-04-17T23:51:55.947221558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.485643687s"
Apr 17 23:51:55.947340 containerd[1575]: time="2026-04-17T23:51:55.947276048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:51:55.953935 containerd[1575]: time="2026-04-17T23:51:55.953773998Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:51:56.024604 containerd[1575]: time="2026-04-17T23:51:56.024416491Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431\""
Apr 17 23:51:56.025416 containerd[1575]: time="2026-04-17T23:51:56.025262600Z" level=info msg="StartContainer for \"9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431\""
Apr 17 23:51:56.157638 containerd[1575]: time="2026-04-17T23:51:56.157406776Z" level=info msg="StartContainer for \"9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431\" returns successfully"
Apr 17 23:51:56.256256 containerd[1575]: time="2026-04-17T23:51:56.255767489Z" level=info msg="shim disconnected" id=9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431 namespace=k8s.io
Apr 17 23:51:56.257174 containerd[1575]: time="2026-04-17T23:51:56.256279391Z" level=warning msg="cleaning up after shim disconnected" id=9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431 namespace=k8s.io
Apr 17 23:51:56.257174 containerd[1575]: time="2026-04-17T23:51:56.256298673Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:51:56.328370 kubelet[2677]: E0417 23:51:56.328283 2677 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c" Apr 17 23:51:56.489636 containerd[1575]: time="2026-04-17T23:51:56.489245746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:51:56.790876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cda62111761044a2341ee197201ed898e23fdabce36a2d8d4ca78cf9b9e0431-rootfs.mount: Deactivated successfully. Apr 17 23:51:58.328064 kubelet[2677]: E0417 23:51:58.327762 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c" Apr 17 23:52:00.299127 containerd[1575]: time="2026-04-17T23:52:00.298919611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:00.301116 containerd[1575]: time="2026-04-17T23:52:00.300992074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:52:00.302825 containerd[1575]: time="2026-04-17T23:52:00.302750876Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:00.305350 containerd[1575]: time="2026-04-17T23:52:00.305263267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 
23:52:00.306311 containerd[1575]: time="2026-04-17T23:52:00.306254581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.816882253s" Apr 17 23:52:00.306354 containerd[1575]: time="2026-04-17T23:52:00.306321999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:52:00.313339 containerd[1575]: time="2026-04-17T23:52:00.313187359Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:52:00.327712 kubelet[2677]: E0417 23:52:00.327562 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxr77" podUID="5675cbd7-fdb0-43a9-beed-f1806791852c" Apr 17 23:52:00.332755 containerd[1575]: time="2026-04-17T23:52:00.332716110Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3\"" Apr 17 23:52:00.333779 containerd[1575]: time="2026-04-17T23:52:00.333693594Z" level=info msg="StartContainer for \"97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3\"" Apr 17 23:52:00.424305 containerd[1575]: time="2026-04-17T23:52:00.424186499Z" level=info msg="StartContainer for 
\"97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3\" returns successfully" Apr 17 23:52:01.132227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3-rootfs.mount: Deactivated successfully. Apr 17 23:52:01.138263 containerd[1575]: time="2026-04-17T23:52:01.138059763Z" level=info msg="shim disconnected" id=97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3 namespace=k8s.io Apr 17 23:52:01.138413 containerd[1575]: time="2026-04-17T23:52:01.138212900Z" level=warning msg="cleaning up after shim disconnected" id=97d24ce85111a3e2bf78cc7864581d9c94c41f8cb8a9e4b427ea52d0da6c45c3 namespace=k8s.io Apr 17 23:52:01.138413 containerd[1575]: time="2026-04-17T23:52:01.138298966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:52:01.181147 kubelet[2677]: I0417 23:52:01.179008 2677 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:52:01.342300 kubelet[2677]: I0417 23:52:01.341981 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztxb\" (UniqueName: \"kubernetes.io/projected/a23d8751-4902-4d1d-8ccf-8b84b4c25b8b-kube-api-access-wztxb\") pod \"coredns-674b8bbfcf-qpwz2\" (UID: \"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b\") " pod="kube-system/coredns-674b8bbfcf-qpwz2" Apr 17 23:52:01.342300 kubelet[2677]: I0417 23:52:01.342017 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23d8751-4902-4d1d-8ccf-8b84b4c25b8b-config-volume\") pod \"coredns-674b8bbfcf-qpwz2\" (UID: \"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b\") " pod="kube-system/coredns-674b8bbfcf-qpwz2" Apr 17 23:52:01.342300 kubelet[2677]: I0417 23:52:01.342032 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tcg\" (UniqueName: 
\"kubernetes.io/projected/db5c2130-0e95-4916-badc-e8ed1ee5a320-kube-api-access-w9tcg\") pod \"coredns-674b8bbfcf-k6w8b\" (UID: \"db5c2130-0e95-4916-badc-e8ed1ee5a320\") " pod="kube-system/coredns-674b8bbfcf-k6w8b" Apr 17 23:52:01.342300 kubelet[2677]: I0417 23:52:01.342190 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db5c2130-0e95-4916-badc-e8ed1ee5a320-config-volume\") pod \"coredns-674b8bbfcf-k6w8b\" (UID: \"db5c2130-0e95-4916-badc-e8ed1ee5a320\") " pod="kube-system/coredns-674b8bbfcf-k6w8b" Apr 17 23:52:01.443825 kubelet[2677]: I0417 23:52:01.442850 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-backend-key-pair\") pod \"whisker-cbd55db78-5z6c4\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 23:52:01.443825 kubelet[2677]: I0417 23:52:01.442924 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhhpk\" (UniqueName: \"kubernetes.io/projected/e8c020ef-0550-43e0-8dde-85cd19073ed7-kube-api-access-hhhpk\") pod \"whisker-cbd55db78-5z6c4\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 23:52:01.443825 kubelet[2677]: I0417 23:52:01.443004 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304b7f98-005c-457c-9b59-72da9a1db780-tigera-ca-bundle\") pod \"calico-kube-controllers-d9f98598b-s7zb8\" (UID: \"304b7f98-005c-457c-9b59-72da9a1db780\") " pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" Apr 17 23:52:01.443825 kubelet[2677]: I0417 23:52:01.443032 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be1918a4-9e90-448a-95bf-d09779e58ce9-config\") pod \"goldmane-5b85766d88-xtkhw\" (UID: \"be1918a4-9e90-448a-95bf-d09779e58ce9\") " pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.443825 kubelet[2677]: I0417 23:52:01.443103 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr6sj\" (UniqueName: \"kubernetes.io/projected/ea532de7-7ec9-4f7c-9d00-97d7c422c363-kube-api-access-qr6sj\") pod \"calico-apiserver-5b9f7b68ff-t4zc8\" (UID: \"ea532de7-7ec9-4f7c-9d00-97d7c422c363\") " pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" Apr 17 23:52:01.444517 kubelet[2677]: I0417 23:52:01.443129 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xr2\" (UniqueName: \"kubernetes.io/projected/304b7f98-005c-457c-9b59-72da9a1db780-kube-api-access-x6xr2\") pod \"calico-kube-controllers-d9f98598b-s7zb8\" (UID: \"304b7f98-005c-457c-9b59-72da9a1db780\") " pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" Apr 17 23:52:01.444517 kubelet[2677]: I0417 23:52:01.443147 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1918a4-9e90-448a-95bf-d09779e58ce9-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xtkhw\" (UID: \"be1918a4-9e90-448a-95bf-d09779e58ce9\") " pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.444517 kubelet[2677]: I0417 23:52:01.443165 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-ca-bundle\") pod \"whisker-cbd55db78-5z6c4\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 
23:52:01.444517 kubelet[2677]: I0417 23:52:01.443182 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2wj\" (UniqueName: \"kubernetes.io/projected/be1918a4-9e90-448a-95bf-d09779e58ce9-kube-api-access-8m2wj\") pod \"goldmane-5b85766d88-xtkhw\" (UID: \"be1918a4-9e90-448a-95bf-d09779e58ce9\") " pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.444517 kubelet[2677]: I0417 23:52:01.443197 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08184ab9-9f04-4144-ba8e-b4322834631d-calico-apiserver-certs\") pod \"calico-apiserver-5b9f7b68ff-rghf4\" (UID: \"08184ab9-9f04-4144-ba8e-b4322834631d\") " pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" Apr 17 23:52:01.444615 kubelet[2677]: I0417 23:52:01.443209 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be1918a4-9e90-448a-95bf-d09779e58ce9-goldmane-key-pair\") pod \"goldmane-5b85766d88-xtkhw\" (UID: \"be1918a4-9e90-448a-95bf-d09779e58ce9\") " pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.444615 kubelet[2677]: I0417 23:52:01.443225 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea532de7-7ec9-4f7c-9d00-97d7c422c363-calico-apiserver-certs\") pod \"calico-apiserver-5b9f7b68ff-t4zc8\" (UID: \"ea532de7-7ec9-4f7c-9d00-97d7c422c363\") " pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" Apr 17 23:52:01.444615 kubelet[2677]: I0417 23:52:01.443243 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq48\" (UniqueName: \"kubernetes.io/projected/08184ab9-9f04-4144-ba8e-b4322834631d-kube-api-access-ksq48\") pod 
\"calico-apiserver-5b9f7b68ff-rghf4\" (UID: \"08184ab9-9f04-4144-ba8e-b4322834631d\") " pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" Apr 17 23:52:01.444615 kubelet[2677]: I0417 23:52:01.443257 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-nginx-config\") pod \"whisker-cbd55db78-5z6c4\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 23:52:01.532303 kubelet[2677]: E0417 23:52:01.532138 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:01.533240 containerd[1575]: time="2026-04-17T23:52:01.533169904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qpwz2,Uid:a23d8751-4902-4d1d-8ccf-8b84b4c25b8b,Namespace:kube-system,Attempt:0,}" Apr 17 23:52:01.566556 kubelet[2677]: E0417 23:52:01.563523 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:01.585637 containerd[1575]: time="2026-04-17T23:52:01.585549546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6w8b,Uid:db5c2130-0e95-4916-badc-e8ed1ee5a320,Namespace:kube-system,Attempt:0,}" Apr 17 23:52:01.597006 containerd[1575]: time="2026-04-17T23:52:01.596863438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbd55db78-5z6c4,Uid:e8c020ef-0550-43e0-8dde-85cd19073ed7,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:01.598641 containerd[1575]: time="2026-04-17T23:52:01.598581695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xtkhw,Uid:be1918a4-9e90-448a-95bf-d09779e58ce9,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:01.598777 
containerd[1575]: time="2026-04-17T23:52:01.598735936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-t4zc8,Uid:ea532de7-7ec9-4f7c-9d00-97d7c422c363,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:01.605566 containerd[1575]: time="2026-04-17T23:52:01.605522453Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:52:01.606183 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:50444.service - OpenSSH per-connection server daemon (10.0.0.1:50444). Apr 17 23:52:01.669105 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 50444 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:01.673683 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:01.674779 containerd[1575]: time="2026-04-17T23:52:01.674643869Z" level=info msg="CreateContainer within sandbox \"8fa83fc56c4ee4228a8daf1813e8d23197bf9f5c68d050a1a094ecf87f56f83c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b2be17223bce140137dfc56b0b1a4c8d03485b2d102b9a56262b73028115ce3e\"" Apr 17 23:52:01.683824 containerd[1575]: time="2026-04-17T23:52:01.682830674Z" level=info msg="StartContainer for \"b2be17223bce140137dfc56b0b1a4c8d03485b2d102b9a56262b73028115ce3e\"" Apr 17 23:52:01.696546 systemd-logind[1553]: New session 8 of user core. Apr 17 23:52:01.702087 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 23:52:01.822791 containerd[1575]: time="2026-04-17T23:52:01.822704634Z" level=info msg="StartContainer for \"b2be17223bce140137dfc56b0b1a4c8d03485b2d102b9a56262b73028115ce3e\" returns successfully" Apr 17 23:52:01.886892 containerd[1575]: time="2026-04-17T23:52:01.886789199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-rghf4,Uid:08184ab9-9f04-4144-ba8e-b4322834631d,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:01.892636 containerd[1575]: time="2026-04-17T23:52:01.891926575Z" level=error msg="Failed to destroy network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.894905 containerd[1575]: time="2026-04-17T23:52:01.894756285Z" level=error msg="Failed to destroy network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.895588 containerd[1575]: time="2026-04-17T23:52:01.895393937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9f98598b-s7zb8,Uid:304b7f98-005c-457c-9b59-72da9a1db780,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:01.895716 containerd[1575]: time="2026-04-17T23:52:01.895654246Z" level=error msg="encountered an error cleaning up failed sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.895759 containerd[1575]: 
time="2026-04-17T23:52:01.895731394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6w8b,Uid:db5c2130-0e95-4916-badc-e8ed1ee5a320,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.895826 containerd[1575]: time="2026-04-17T23:52:01.895228761Z" level=error msg="encountered an error cleaning up failed sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.895847 containerd[1575]: time="2026-04-17T23:52:01.895825804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xtkhw,Uid:be1918a4-9e90-448a-95bf-d09779e58ce9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.902415 kubelet[2677]: E0417 23:52:01.902253 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.902415 kubelet[2677]: E0417 23:52:01.902334 2677 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.902415 kubelet[2677]: E0417 23:52:01.902355 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xtkhw" Apr 17 23:52:01.902773 kubelet[2677]: E0417 23:52:01.902406 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-xtkhw_calico-system(be1918a4-9e90-448a-95bf-d09779e58ce9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-xtkhw_calico-system(be1918a4-9e90-448a-95bf-d09779e58ce9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xtkhw" podUID="be1918a4-9e90-448a-95bf-d09779e58ce9" Apr 17 23:52:01.903228 kubelet[2677]: E0417 23:52:01.902980 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.903228 kubelet[2677]: E0417 23:52:01.903137 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k6w8b" Apr 17 23:52:01.903570 kubelet[2677]: E0417 23:52:01.903153 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k6w8b" Apr 17 23:52:01.904425 kubelet[2677]: E0417 23:52:01.903262 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k6w8b_kube-system(db5c2130-0e95-4916-badc-e8ed1ee5a320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k6w8b_kube-system(db5c2130-0e95-4916-badc-e8ed1ee5a320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k6w8b" podUID="db5c2130-0e95-4916-badc-e8ed1ee5a320" Apr 17 23:52:01.921027 sshd[3643]: 
pam_unix(sshd:session): session closed for user core Apr 17 23:52:01.926576 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:50444.service: Deactivated successfully. Apr 17 23:52:01.930904 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:52:01.933087 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:52:01.935235 systemd-logind[1553]: Removed session 8. Apr 17 23:52:01.979396 containerd[1575]: time="2026-04-17T23:52:01.979301217Z" level=error msg="Failed to destroy network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.981033 containerd[1575]: time="2026-04-17T23:52:01.980239716Z" level=error msg="encountered an error cleaning up failed sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.981033 containerd[1575]: time="2026-04-17T23:52:01.980282116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qpwz2,Uid:a23d8751-4902-4d1d-8ccf-8b84b4c25b8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.981126 kubelet[2677]: E0417 23:52:01.980574 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.981126 kubelet[2677]: E0417 23:52:01.980632 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qpwz2" Apr 17 23:52:01.981126 kubelet[2677]: E0417 23:52:01.980651 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qpwz2" Apr 17 23:52:01.981201 kubelet[2677]: E0417 23:52:01.980694 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qpwz2_kube-system(a23d8751-4902-4d1d-8ccf-8b84b4c25b8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qpwz2_kube-system(a23d8751-4902-4d1d-8ccf-8b84b4c25b8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qpwz2" 
podUID="a23d8751-4902-4d1d-8ccf-8b84b4c25b8b" Apr 17 23:52:01.989802 containerd[1575]: time="2026-04-17T23:52:01.989409930Z" level=error msg="Failed to destroy network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.990244 containerd[1575]: time="2026-04-17T23:52:01.990221560Z" level=error msg="encountered an error cleaning up failed sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.990333 containerd[1575]: time="2026-04-17T23:52:01.990318425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbd55db78-5z6c4,Uid:e8c020ef-0550-43e0-8dde-85cd19073ed7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.990754 kubelet[2677]: E0417 23:52:01.990727 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:01.990858 kubelet[2677]: E0417 23:52:01.990847 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 23:52:01.990912 kubelet[2677]: E0417 23:52:01.990902 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cbd55db78-5z6c4" Apr 17 23:52:01.991676 kubelet[2677]: E0417 23:52:01.991055 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cbd55db78-5z6c4_calico-system(e8c020ef-0550-43e0-8dde-85cd19073ed7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cbd55db78-5z6c4_calico-system(e8c020ef-0550-43e0-8dde-85cd19073ed7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cbd55db78-5z6c4" podUID="e8c020ef-0550-43e0-8dde-85cd19073ed7" Apr 17 23:52:02.013597 containerd[1575]: time="2026-04-17T23:52:02.012150578Z" level=error msg="Failed to destroy network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.013597 containerd[1575]: time="2026-04-17T23:52:02.012560047Z" level=error msg="encountered an error cleaning up failed sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.013597 containerd[1575]: time="2026-04-17T23:52:02.012595837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-t4zc8,Uid:ea532de7-7ec9-4f7c-9d00-97d7c422c363,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.015389 kubelet[2677]: E0417 23:52:02.012885 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.015389 kubelet[2677]: E0417 23:52:02.012974 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" Apr 
17 23:52:02.015389 kubelet[2677]: E0417 23:52:02.012997 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" Apr 17 23:52:02.015773 kubelet[2677]: E0417 23:52:02.013046 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9f7b68ff-t4zc8_calico-system(ea532de7-7ec9-4f7c-9d00-97d7c422c363)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9f7b68ff-t4zc8_calico-system(ea532de7-7ec9-4f7c-9d00-97d7c422c363)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" podUID="ea532de7-7ec9-4f7c-9d00-97d7c422c363" Apr 17 23:52:02.068207 containerd[1575]: time="2026-04-17T23:52:02.068121826Z" level=error msg="Failed to destroy network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.069858 containerd[1575]: time="2026-04-17T23:52:02.069680456Z" level=error msg="encountered an error cleaning up failed sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.070018 containerd[1575]: time="2026-04-17T23:52:02.069994112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9f98598b-s7zb8,Uid:304b7f98-005c-457c-9b59-72da9a1db780,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.071650 kubelet[2677]: E0417 23:52:02.071392 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.073031 kubelet[2677]: E0417 23:52:02.072311 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" Apr 17 23:52:02.073031 kubelet[2677]: E0417 23:52:02.072417 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" Apr 17 23:52:02.073393 kubelet[2677]: E0417 23:52:02.072564 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d9f98598b-s7zb8_calico-system(304b7f98-005c-457c-9b59-72da9a1db780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d9f98598b-s7zb8_calico-system(304b7f98-005c-457c-9b59-72da9a1db780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" podUID="304b7f98-005c-457c-9b59-72da9a1db780" Apr 17 23:52:02.073572 containerd[1575]: time="2026-04-17T23:52:02.073527849Z" level=error msg="Failed to destroy network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.074173 containerd[1575]: time="2026-04-17T23:52:02.074023291Z" level=error msg="encountered an error cleaning up failed sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.074173 containerd[1575]: time="2026-04-17T23:52:02.074066031Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-rghf4,Uid:08184ab9-9f04-4144-ba8e-b4322834631d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.074286 kubelet[2677]: E0417 23:52:02.074227 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:52:02.074286 kubelet[2677]: E0417 23:52:02.074256 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" Apr 17 23:52:02.074286 kubelet[2677]: E0417 23:52:02.074272 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" Apr 17 23:52:02.074343 kubelet[2677]: E0417 23:52:02.074301 2677 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9f7b68ff-rghf4_calico-system(08184ab9-9f04-4144-ba8e-b4322834631d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9f7b68ff-rghf4_calico-system(08184ab9-9f04-4144-ba8e-b4322834631d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" podUID="08184ab9-9f04-4144-ba8e-b4322834631d" Apr 17 23:52:02.330844 containerd[1575]: time="2026-04-17T23:52:02.330622430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxr77,Uid:5675cbd7-fdb0-43a9-beed-f1806791852c,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:02.466495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e-shm.mount: Deactivated successfully. 
Apr 17 23:52:02.546572 kubelet[2677]: I0417 23:52:02.546546 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:02.551901 kubelet[2677]: I0417 23:52:02.551779 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Apr 17 23:52:02.553087 containerd[1575]: time="2026-04-17T23:52:02.552903470Z" level=info msg="StopPodSandbox for \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\"" Apr 17 23:52:02.554399 kubelet[2677]: I0417 23:52:02.554332 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:02.556028 containerd[1575]: time="2026-04-17T23:52:02.555841098Z" level=info msg="StopPodSandbox for \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\"" Apr 17 23:52:02.557870 containerd[1575]: time="2026-04-17T23:52:02.557722201Z" level=info msg="StopPodSandbox for \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\"" Apr 17 23:52:02.558524 containerd[1575]: time="2026-04-17T23:52:02.558296463Z" level=info msg="Ensure that sandbox 237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b in task-service has been cleanup successfully" Apr 17 23:52:02.559619 containerd[1575]: time="2026-04-17T23:52:02.559321731Z" level=info msg="Ensure that sandbox 0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582 in task-service has been cleanup successfully" Apr 17 23:52:02.560737 kubelet[2677]: I0417 23:52:02.560710 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:02.563815 containerd[1575]: time="2026-04-17T23:52:02.563662784Z" level=info msg="StopPodSandbox for 
\"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\"" Apr 17 23:52:02.563902 containerd[1575]: time="2026-04-17T23:52:02.563857715Z" level=info msg="Ensure that sandbox 9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e in task-service has been cleanup successfully" Apr 17 23:52:02.568659 containerd[1575]: time="2026-04-17T23:52:02.568637717Z" level=info msg="Ensure that sandbox 3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793 in task-service has been cleanup successfully" Apr 17 23:52:02.585000 systemd-networkd[1255]: cali03160cfbc1c: Link UP Apr 17 23:52:02.587140 systemd-networkd[1255]: cali03160cfbc1c: Gained carrier Apr 17 23:52:02.606533 kubelet[2677]: I0417 23:52:02.605353 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:02.606849 kubelet[2677]: I0417 23:52:02.606768 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cvbw7" podStartSLOduration=3.745980135 podStartE2EDuration="21.606753872s" podCreationTimestamp="2026-04-17 23:51:41 +0000 UTC" firstStartedPulling="2026-04-17 23:51:42.44714449 +0000 UTC m=+17.220325008" lastFinishedPulling="2026-04-17 23:52:00.307918226 +0000 UTC m=+35.081098745" observedRunningTime="2026-04-17 23:52:02.606262362 +0000 UTC m=+37.379442891" watchObservedRunningTime="2026-04-17 23:52:02.606753872 +0000 UTC m=+37.379934401" Apr 17 23:52:02.610680 containerd[1575]: time="2026-04-17T23:52:02.610657641Z" level=info msg="StopPodSandbox for \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\"" Apr 17 23:52:02.611909 containerd[1575]: time="2026-04-17T23:52:02.611892664Z" level=info msg="Ensure that sandbox 27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e in task-service has been cleanup successfully" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.382 [ERROR][3942] 
cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.406 [INFO][3942] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nxr77-eth0 csi-node-driver- calico-system 5675cbd7-fdb0-43a9-beed-f1806791852c 765 0 2026-04-17 23:51:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nxr77 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali03160cfbc1c [] [] }} ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.406 [INFO][3942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.464 [INFO][3956] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" HandleID="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Workload="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.477 [INFO][3956] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" HandleID="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Workload="localhost-k8s-csi--node--driver--nxr77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nxr77", "timestamp":"2026-04-17 23:52:02.46479814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a2c0)} Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.477 [INFO][3956] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.477 [INFO][3956] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.477 [INFO][3956] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.482 [INFO][3956] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.517 [INFO][3956] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.526 [INFO][3956] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.528 [INFO][3956] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.530 [INFO][3956] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.530 [INFO][3956] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.532 [INFO][3956] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129 Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.537 [INFO][3956] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.546 [INFO][3956] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.546 [INFO][3956] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" host="localhost" Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.546 [INFO][3956] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:02.612307 containerd[1575]: 2026-04-17 23:52:02.546 [INFO][3956] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" HandleID="k8s-pod-network.a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Workload="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.550 [INFO][3942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxr77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5675cbd7-fdb0-43a9-beed-f1806791852c", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nxr77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03160cfbc1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.551 [INFO][3942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.551 [INFO][3942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03160cfbc1c ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.590 [INFO][3942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.590 [INFO][3942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" 
Namespace="calico-system" Pod="csi-node-driver-nxr77" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxr77-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5675cbd7-fdb0-43a9-beed-f1806791852c", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129", Pod:"csi-node-driver-nxr77", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03160cfbc1c", MAC:"de:79:68:4f:47:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:02.612826 containerd[1575]: 2026-04-17 23:52:02.608 [INFO][3942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129" Namespace="calico-system" Pod="csi-node-driver-nxr77" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nxr77-eth0" Apr 17 23:52:02.618614 kubelet[2677]: I0417 23:52:02.617865 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:02.626355 containerd[1575]: time="2026-04-17T23:52:02.626221155Z" level=info msg="StopPodSandbox for \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\"" Apr 17 23:52:02.629421 containerd[1575]: time="2026-04-17T23:52:02.629220781Z" level=info msg="Ensure that sandbox 30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca in task-service has been cleanup successfully" Apr 17 23:52:02.641825 kubelet[2677]: I0417 23:52:02.641792 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:02.648420 containerd[1575]: time="2026-04-17T23:52:02.648307132Z" level=info msg="StopPodSandbox for \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\"" Apr 17 23:52:02.648774 containerd[1575]: time="2026-04-17T23:52:02.648704009Z" level=info msg="Ensure that sandbox e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7 in task-service has been cleanup successfully" Apr 17 23:52:02.721590 containerd[1575]: time="2026-04-17T23:52:02.721299209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:02.721590 containerd[1575]: time="2026-04-17T23:52:02.721381581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:02.721590 containerd[1575]: time="2026-04-17T23:52:02.721391092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:02.721773 containerd[1575]: time="2026-04-17T23:52:02.721597522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:02.845037 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.735 [INFO][4000] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.735 [INFO][4000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" iface="eth0" netns="/var/run/netns/cni-b337fbc3-9592-c86b-185f-cff7e25d229b" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.735 [INFO][4000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" iface="eth0" netns="/var/run/netns/cni-b337fbc3-9592-c86b-185f-cff7e25d229b" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.736 [INFO][4000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" iface="eth0" netns="/var/run/netns/cni-b337fbc3-9592-c86b-185f-cff7e25d229b" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.736 [INFO][4000] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.736 [INFO][4000] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.850 [INFO][4149] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.854 [INFO][4149] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.854 [INFO][4149] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.879 [WARNING][4149] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.879 [INFO][4149] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.883 [INFO][4149] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:02.888169 containerd[1575]: 2026-04-17 23:52:02.884 [INFO][4000] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:02.892510 containerd[1575]: time="2026-04-17T23:52:02.891322315Z" level=info msg="TearDown network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" successfully" Apr 17 23:52:02.892510 containerd[1575]: time="2026-04-17T23:52:02.891358915Z" level=info msg="StopPodSandbox for \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" returns successfully" Apr 17 23:52:02.892378 systemd[1]: run-netns-cni\x2db337fbc3\x2d9592\x2dc86b\x2d185f\x2dcff7e25d229b.mount: Deactivated successfully. 
Apr 17 23:52:02.894033 kubelet[2677]: E0417 23:52:02.893000 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:02.900719 containerd[1575]: time="2026-04-17T23:52:02.900553941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qpwz2,Uid:a23d8751-4902-4d1d-8ccf-8b84b4c25b8b,Namespace:kube-system,Attempt:1,}" Apr 17 23:52:02.961510 containerd[1575]: time="2026-04-17T23:52:02.960019214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxr77,Uid:5675cbd7-fdb0-43a9-beed-f1806791852c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129\"" Apr 17 23:52:02.966107 containerd[1575]: time="2026-04-17T23:52:02.966018009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.822 [INFO][4024] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.824 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" iface="eth0" netns="/var/run/netns/cni-713c4677-d8f4-8015-bcbe-88d7dfcb4024" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.827 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" iface="eth0" netns="/var/run/netns/cni-713c4677-d8f4-8015-bcbe-88d7dfcb4024" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" iface="eth0" netns="/var/run/netns/cni-713c4677-d8f4-8015-bcbe-88d7dfcb4024" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.930 [INFO][4187] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.930 [INFO][4187] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.930 [INFO][4187] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.951 [WARNING][4187] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.951 [INFO][4187] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.970 [INFO][4187] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.003799 containerd[1575]: 2026-04-17 23:52:02.991 [INFO][4024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:03.005208 containerd[1575]: time="2026-04-17T23:52:03.004392323Z" level=info msg="TearDown network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" successfully" Apr 17 23:52:03.005208 containerd[1575]: time="2026-04-17T23:52:03.005185468Z" level=info msg="StopPodSandbox for \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" returns successfully" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" iface="eth0" netns="/var/run/netns/cni-bd87368a-6cf8-0549-9b65-97ee892bc581" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" iface="eth0" netns="/var/run/netns/cni-bd87368a-6cf8-0549-9b65-97ee892bc581" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" iface="eth0" netns="/var/run/netns/cni-bd87368a-6cf8-0549-9b65-97ee892bc581" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.842 [INFO][4105] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.962 [INFO][4201] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.962 [INFO][4201] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.971 [INFO][4201] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.993 [WARNING][4201] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.993 [INFO][4201] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:02.997 [INFO][4201] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.005294 containerd[1575]: 2026-04-17 23:52:03.003 [INFO][4105] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:03.006073 containerd[1575]: time="2026-04-17T23:52:03.006021414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9f98598b-s7zb8,Uid:304b7f98-005c-457c-9b59-72da9a1db780,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:03.007257 containerd[1575]: time="2026-04-17T23:52:03.007114614Z" level=info msg="TearDown network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" successfully" Apr 17 23:52:03.007257 containerd[1575]: time="2026-04-17T23:52:03.007152293Z" level=info msg="StopPodSandbox for \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" returns successfully" Apr 17 23:52:03.007969 containerd[1575]: time="2026-04-17T23:52:03.007866732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-rghf4,Uid:08184ab9-9f04-4144-ba8e-b4322834631d,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.911 [INFO][4107] cni-plugin/k8s.go 652: Cleaning up 
netns ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.911 [INFO][4107] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" iface="eth0" netns="/var/run/netns/cni-3b77d9e7-4df0-1abd-4c60-45d049104645" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.912 [INFO][4107] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" iface="eth0" netns="/var/run/netns/cni-3b77d9e7-4df0-1abd-4c60-45d049104645" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.912 [INFO][4107] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" iface="eth0" netns="/var/run/netns/cni-3b77d9e7-4df0-1abd-4c60-45d049104645" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.912 [INFO][4107] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.912 [INFO][4107] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.996 [INFO][4213] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.997 [INFO][4213] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:02.997 [INFO][4213] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:03.014 [WARNING][4213] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:03.014 [INFO][4213] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:03.020 [INFO][4213] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.030950 containerd[1575]: 2026-04-17 23:52:03.026 [INFO][4107] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:03.032031 containerd[1575]: time="2026-04-17T23:52:03.031606611Z" level=info msg="TearDown network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" successfully" Apr 17 23:52:03.032031 containerd[1575]: time="2026-04-17T23:52:03.031627691Z" level=info msg="StopPodSandbox for \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" returns successfully" Apr 17 23:52:03.033220 containerd[1575]: time="2026-04-17T23:52:03.033203084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-t4zc8,Uid:ea532de7-7ec9-4f7c-9d00-97d7c422c363,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.788 [INFO][4040] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.789 [INFO][4040] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" iface="eth0" netns="/var/run/netns/cni-615fab96-8a55-c12a-ac47-5a48dba0e264" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.789 [INFO][4040] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" iface="eth0" netns="/var/run/netns/cni-615fab96-8a55-c12a-ac47-5a48dba0e264" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.790 [INFO][4040] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" iface="eth0" netns="/var/run/netns/cni-615fab96-8a55-c12a-ac47-5a48dba0e264" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.790 [INFO][4040] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.790 [INFO][4040] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.996 [INFO][4174] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:02.997 [INFO][4174] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:03.020 [INFO][4174] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:03.034 [WARNING][4174] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:03.036 [INFO][4174] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:03.043 [INFO][4174] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.073160 containerd[1575]: 2026-04-17 23:52:03.065 [INFO][4040] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Apr 17 23:52:03.073672 containerd[1575]: time="2026-04-17T23:52:03.073366688Z" level=info msg="TearDown network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" successfully" Apr 17 23:52:03.073672 containerd[1575]: time="2026-04-17T23:52:03.073392226Z" level=info msg="StopPodSandbox for \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" returns successfully" Apr 17 23:52:03.074599 containerd[1575]: time="2026-04-17T23:52:03.074314725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xtkhw,Uid:be1918a4-9e90-448a-95bf-d09779e58ce9,Namespace:calico-system,Attempt:1,}" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.876 [INFO][4106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.879 [INFO][4106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" iface="eth0" netns="/var/run/netns/cni-47fd86f7-9b6b-842a-c2b7-56f7cd72cc3a" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.882 [INFO][4106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" iface="eth0" netns="/var/run/netns/cni-47fd86f7-9b6b-842a-c2b7-56f7cd72cc3a" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.901 [INFO][4106] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" iface="eth0" netns="/var/run/netns/cni-47fd86f7-9b6b-842a-c2b7-56f7cd72cc3a" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.902 [INFO][4106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.902 [INFO][4106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:02.999 [INFO][4210] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.000 [INFO][4210] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.044 [INFO][4210] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.060 [WARNING][4210] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.060 [INFO][4210] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.070 [INFO][4210] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.085101 containerd[1575]: 2026-04-17 23:52:03.082 [INFO][4106] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:03.088125 containerd[1575]: time="2026-04-17T23:52:03.087679502Z" level=info msg="TearDown network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" successfully" Apr 17 23:52:03.088125 containerd[1575]: time="2026-04-17T23:52:03.087728211Z" level=info msg="StopPodSandbox for \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" returns successfully" Apr 17 23:52:03.088405 kubelet[2677]: E0417 23:52:03.088342 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:03.092517 containerd[1575]: time="2026-04-17T23:52:03.092413953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6w8b,Uid:db5c2130-0e95-4916-badc-e8ed1ee5a320,Namespace:kube-system,Attempt:1,}" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.802 [INFO][4034] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.802 [INFO][4034] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" iface="eth0" netns="/var/run/netns/cni-50be0234-3eec-9268-ea9e-483458b4981d" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.802 [INFO][4034] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" iface="eth0" netns="/var/run/netns/cni-50be0234-3eec-9268-ea9e-483458b4981d" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4034] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" iface="eth0" netns="/var/run/netns/cni-50be0234-3eec-9268-ea9e-483458b4981d" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4034] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:02.829 [INFO][4034] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.001 [INFO][4189] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.003 [INFO][4189] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.069 [INFO][4189] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.092 [WARNING][4189] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.092 [INFO][4189] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.095 [INFO][4189] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.105431 containerd[1575]: 2026-04-17 23:52:03.102 [INFO][4034] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:03.112020 containerd[1575]: time="2026-04-17T23:52:03.111990641Z" level=info msg="TearDown network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" successfully" Apr 17 23:52:03.112096 containerd[1575]: time="2026-04-17T23:52:03.112087902Z" level=info msg="StopPodSandbox for \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" returns successfully" Apr 17 23:52:03.215829 systemd-networkd[1255]: caliad3eb5973e5: Link UP Apr 17 23:52:03.216089 systemd-networkd[1255]: caliad3eb5973e5: Gained carrier Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.004 [ERROR][4232] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.031 [INFO][4232] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0 coredns-674b8bbfcf- kube-system a23d8751-4902-4d1d-8ccf-8b84b4c25b8b 1006 0 2026-04-17 23:51:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qpwz2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad3eb5973e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.031 [INFO][4232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.115 [INFO][4279] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" HandleID="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.131 [INFO][4279] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" HandleID="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013bbd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qpwz2", "timestamp":"2026-04-17 23:52:03.11574872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000216c60)} Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.131 [INFO][4279] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.131 [INFO][4279] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.131 [INFO][4279] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.137 [INFO][4279] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.150 [INFO][4279] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.162 [INFO][4279] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.165 [INFO][4279] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.169 [INFO][4279] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.169 [INFO][4279] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.172 [INFO][4279] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.181 [INFO][4279] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.195 [INFO][4279] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.197 [INFO][4279] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" host="localhost" Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.197 [INFO][4279] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.247685 containerd[1575]: 2026-04-17 23:52:03.197 [INFO][4279] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" HandleID="k8s-pod-network.7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.211 [INFO][4232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qpwz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3eb5973e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.211 [INFO][4232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.211 [INFO][4232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad3eb5973e5 ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.217 [INFO][4232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.220 [INFO][4232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa", Pod:"coredns-674b8bbfcf-qpwz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3eb5973e5", MAC:"0a:bd:ec:35:69:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.248397 containerd[1575]: 2026-04-17 23:52:03.240 [INFO][4232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-qpwz2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:03.269835 kubelet[2677]: I0417 23:52:03.269166 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-backend-key-pair\") pod \"e8c020ef-0550-43e0-8dde-85cd19073ed7\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " Apr 17 23:52:03.269835 kubelet[2677]: I0417 23:52:03.269201 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-nginx-config\") pod \"e8c020ef-0550-43e0-8dde-85cd19073ed7\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " Apr 17 23:52:03.269835 kubelet[2677]: I0417 23:52:03.269216 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-ca-bundle\") pod \"e8c020ef-0550-43e0-8dde-85cd19073ed7\" (UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " Apr 17 23:52:03.269835 kubelet[2677]: I0417 23:52:03.269237 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhhpk\" (UniqueName: \"kubernetes.io/projected/e8c020ef-0550-43e0-8dde-85cd19073ed7-kube-api-access-hhhpk\") pod \"e8c020ef-0550-43e0-8dde-85cd19073ed7\" 
(UID: \"e8c020ef-0550-43e0-8dde-85cd19073ed7\") " Apr 17 23:52:03.272131 kubelet[2677]: I0417 23:52:03.271574 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e8c020ef-0550-43e0-8dde-85cd19073ed7" (UID: "e8c020ef-0550-43e0-8dde-85cd19073ed7"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:52:03.272131 kubelet[2677]: I0417 23:52:03.272054 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8c020ef-0550-43e0-8dde-85cd19073ed7" (UID: "e8c020ef-0550-43e0-8dde-85cd19073ed7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:52:03.275120 kubelet[2677]: I0417 23:52:03.275063 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8c020ef-0550-43e0-8dde-85cd19073ed7" (UID: "e8c020ef-0550-43e0-8dde-85cd19073ed7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:52:03.275296 kubelet[2677]: I0417 23:52:03.275240 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c020ef-0550-43e0-8dde-85cd19073ed7-kube-api-access-hhhpk" (OuterVolumeSpecName: "kube-api-access-hhhpk") pod "e8c020ef-0550-43e0-8dde-85cd19073ed7" (UID: "e8c020ef-0550-43e0-8dde-85cd19073ed7"). InnerVolumeSpecName "kube-api-access-hhhpk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:52:03.293891 containerd[1575]: time="2026-04-17T23:52:03.293564097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:03.293891 containerd[1575]: time="2026-04-17T23:52:03.293648676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:03.293891 containerd[1575]: time="2026-04-17T23:52:03.293660170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.293891 containerd[1575]: time="2026-04-17T23:52:03.293729770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.342474 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:03.346983 systemd-networkd[1255]: caliea2d562a022: Link UP Apr 17 23:52:03.349245 systemd-networkd[1255]: caliea2d562a022: Gained carrier Apr 17 23:52:03.384021 kubelet[2677]: I0417 23:52:03.382993 2677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhhpk\" (UniqueName: \"kubernetes.io/projected/e8c020ef-0550-43e0-8dde-85cd19073ed7-kube-api-access-hhhpk\") on node \"localhost\" DevicePath \"\"" Apr 17 23:52:03.384021 kubelet[2677]: I0417 23:52:03.383068 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 23:52:03.384021 kubelet[2677]: I0417 23:52:03.383120 2677 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-nginx-config\") on node 
\"localhost\" DevicePath \"\"" Apr 17 23:52:03.384021 kubelet[2677]: I0417 23:52:03.383126 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c020ef-0550-43e0-8dde-85cd19073ed7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.129 [ERROR][4256] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.156 [INFO][4256] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0 calico-kube-controllers-d9f98598b- calico-system 304b7f98-005c-457c-9b59-72da9a1db780 1008 0 2026-04-17 23:51:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d9f98598b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d9f98598b-s7zb8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea2d562a022 [] [] }} ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.156 [INFO][4256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.432844 
containerd[1575]: 2026-04-17 23:52:03.292 [INFO][4331] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" HandleID="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.310 [INFO][4331] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" HandleID="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d9f98598b-s7zb8", "timestamp":"2026-04-17 23:52:03.292019511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a31e0)} Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.310 [INFO][4331] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.310 [INFO][4331] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.310 [INFO][4331] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.314 [INFO][4331] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.319 [INFO][4331] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.325 [INFO][4331] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.326 [INFO][4331] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.328 [INFO][4331] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.329 [INFO][4331] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.330 [INFO][4331] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02 Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.334 [INFO][4331] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4331] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4331] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" host="localhost" Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4331] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.432844 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4331] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" HandleID="k8s-pod-network.5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 23:52:03.344 [INFO][4256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0", GenerateName:"calico-kube-controllers-d9f98598b-", Namespace:"calico-system", SelfLink:"", UID:"304b7f98-005c-457c-9b59-72da9a1db780", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9f98598b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d9f98598b-s7zb8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea2d562a022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 23:52:03.344 [INFO][4256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 23:52:03.344 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea2d562a022 ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 23:52:03.354 [INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 
23:52:03.357 [INFO][4256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0", GenerateName:"calico-kube-controllers-d9f98598b-", Namespace:"calico-system", SelfLink:"", UID:"304b7f98-005c-457c-9b59-72da9a1db780", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9f98598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02", Pod:"calico-kube-controllers-d9f98598b-s7zb8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea2d562a022", MAC:"de:83:7e:27:6f:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.433501 containerd[1575]: 2026-04-17 
23:52:03.430 [INFO][4256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02" Namespace="calico-system" Pod="calico-kube-controllers-d9f98598b-s7zb8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:03.442385 containerd[1575]: time="2026-04-17T23:52:03.442317257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qpwz2,Uid:a23d8751-4902-4d1d-8ccf-8b84b4c25b8b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa\"" Apr 17 23:52:03.444574 kubelet[2677]: E0417 23:52:03.444545 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:03.450611 containerd[1575]: time="2026-04-17T23:52:03.450519117Z" level=info msg="CreateContainer within sandbox \"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:52:03.466050 systemd[1]: run-netns-cni\x2d713c4677\x2dd8f4\x2d8015\x2dbcbe\x2d88d7dfcb4024.mount: Deactivated successfully. Apr 17 23:52:03.466288 systemd[1]: run-netns-cni\x2dbd87368a\x2d6cf8\x2d0549\x2d9b65\x2d97ee892bc581.mount: Deactivated successfully. Apr 17 23:52:03.466348 systemd[1]: run-netns-cni\x2d3b77d9e7\x2d4df0\x2d1abd\x2d4c60\x2d45d049104645.mount: Deactivated successfully. Apr 17 23:52:03.466406 systemd[1]: run-netns-cni\x2d50be0234\x2d3eec\x2d9268\x2dea9e\x2d483458b4981d.mount: Deactivated successfully. Apr 17 23:52:03.466756 systemd[1]: run-netns-cni\x2d615fab96\x2d8a55\x2dc12a\x2dac47\x2d5a48dba0e264.mount: Deactivated successfully. Apr 17 23:52:03.466824 systemd[1]: run-netns-cni\x2d47fd86f7\x2d9b6b\x2d842a\x2dc2b7\x2d56f7cd72cc3a.mount: Deactivated successfully. 
Apr 17 23:52:03.466880 systemd[1]: var-lib-kubelet-pods-e8c020ef\x2d0550\x2d43e0\x2d8dde\x2d85cd19073ed7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhhpk.mount: Deactivated successfully. Apr 17 23:52:03.466998 systemd[1]: var-lib-kubelet-pods-e8c020ef\x2d0550\x2d43e0\x2d8dde\x2d85cd19073ed7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:52:03.483373 containerd[1575]: time="2026-04-17T23:52:03.483082348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:03.483373 containerd[1575]: time="2026-04-17T23:52:03.483147839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:03.483373 containerd[1575]: time="2026-04-17T23:52:03.483160262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.483373 containerd[1575]: time="2026-04-17T23:52:03.483242097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.483622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533243902.mount: Deactivated successfully. Apr 17 23:52:03.489035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692613818.mount: Deactivated successfully. 
Apr 17 23:52:03.490542 containerd[1575]: time="2026-04-17T23:52:03.490393942Z" level=info msg="CreateContainer within sandbox \"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94d59a231d5283c864670c7e5af0ad92593e017b4e6342c3b8d09fa94af3013d\"" Apr 17 23:52:03.492500 containerd[1575]: time="2026-04-17T23:52:03.491763485Z" level=info msg="StartContainer for \"94d59a231d5283c864670c7e5af0ad92593e017b4e6342c3b8d09fa94af3013d\"" Apr 17 23:52:03.498096 systemd-networkd[1255]: cali90255fd1f65: Link UP Apr 17 23:52:03.499092 systemd-networkd[1255]: cali90255fd1f65: Gained carrier Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.179 [ERROR][4315] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.224 [INFO][4315] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0 coredns-674b8bbfcf- kube-system db5c2130-0e95-4916-badc-e8ed1ee5a320 1012 0 2026-04-17 23:51:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-k6w8b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90255fd1f65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.224 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.304 [INFO][4359] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" HandleID="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.318 [INFO][4359] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" HandleID="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f82e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-k6w8b", "timestamp":"2026-04-17 23:52:03.304431872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030e000)} Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.318 [INFO][4359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.342 [INFO][4359] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.423 [INFO][4359] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.429 [INFO][4359] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.441 [INFO][4359] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.445 [INFO][4359] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.448 [INFO][4359] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.448 [INFO][4359] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.454 [INFO][4359] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.468 [INFO][4359] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.478 [INFO][4359] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.479 [INFO][4359] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" host="localhost" Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.479 [INFO][4359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.521758 containerd[1575]: 2026-04-17 23:52:03.479 [INFO][4359] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" HandleID="k8s-pod-network.d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.491 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db5c2130-0e95-4916-badc-e8ed1ee5a320", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-k6w8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90255fd1f65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.492 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.492 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90255fd1f65 ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.499 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.500 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db5c2130-0e95-4916-badc-e8ed1ee5a320", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c", Pod:"coredns-674b8bbfcf-k6w8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90255fd1f65", MAC:"ae:55:b8:aa:a5:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.525652 containerd[1575]: 2026-04-17 23:52:03.518 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k6w8b" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:03.530658 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:03.611551 containerd[1575]: time="2026-04-17T23:52:03.605308077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:03.611551 containerd[1575]: time="2026-04-17T23:52:03.605396320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:03.611551 containerd[1575]: time="2026-04-17T23:52:03.605417524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.611551 containerd[1575]: time="2026-04-17T23:52:03.605532653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.665082 containerd[1575]: time="2026-04-17T23:52:03.664850371Z" level=info msg="StartContainer for \"94d59a231d5283c864670c7e5af0ad92593e017b4e6342c3b8d09fa94af3013d\" returns successfully" Apr 17 23:52:03.689667 kubelet[2677]: E0417 23:52:03.689027 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:03.726502 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:03.746638 kubelet[2677]: I0417 23:52:03.746486 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qpwz2" podStartSLOduration=33.746427426 podStartE2EDuration="33.746427426s" podCreationTimestamp="2026-04-17 23:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:52:03.706807065 +0000 UTC m=+38.479987586" watchObservedRunningTime="2026-04-17 23:52:03.746427426 +0000 UTC m=+38.519607952" Apr 17 23:52:03.764687 containerd[1575]: time="2026-04-17T23:52:03.764653257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9f98598b-s7zb8,Uid:304b7f98-005c-457c-9b59-72da9a1db780,Namespace:calico-system,Attempt:1,} returns sandbox id \"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02\"" Apr 17 23:52:03.824823 systemd-networkd[1255]: calibbda5b7e062: Link UP Apr 17 23:52:03.830250 systemd-networkd[1255]: calibbda5b7e062: Gained carrier Apr 17 23:52:03.834370 containerd[1575]: time="2026-04-17T23:52:03.833105218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k6w8b,Uid:db5c2130-0e95-4916-badc-e8ed1ee5a320,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c\"" Apr 17 23:52:03.840353 kubelet[2677]: E0417 23:52:03.839235 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:03.850249 containerd[1575]: time="2026-04-17T23:52:03.850220687Z" level=info msg="CreateContainer within sandbox \"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.142 [ERROR][4289] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.169 [INFO][4289] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0 calico-apiserver-5b9f7b68ff- calico-system ea532de7-7ec9-4f7c-9d00-97d7c422c363 1013 0 2026-04-17 23:51:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9f7b68ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b9f7b68ff-t4zc8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibbda5b7e062 [] [] }} ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.170 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.306 [INFO][4346] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" HandleID="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.318 [INFO][4346] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" HandleID="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5b9f7b68ff-t4zc8", "timestamp":"2026-04-17 23:52:03.306351097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f2f20)} Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.318 [INFO][4346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.479 [INFO][4346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.479 [INFO][4346] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.525 [INFO][4346] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.533 [INFO][4346] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.553 [INFO][4346] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.564 [INFO][4346] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.572 [INFO][4346] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.572 [INFO][4346] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.577 [INFO][4346] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.590 [INFO][4346] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.608 [INFO][4346] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.624 [INFO][4346] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" host="localhost" Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.627 [INFO][4346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.858040 containerd[1575]: 2026-04-17 23:52:03.627 [INFO][4346] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" HandleID="k8s-pod-network.bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.686 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"ea532de7-7ec9-4f7c-9d00-97d7c422c363", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b9f7b68ff-t4zc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibbda5b7e062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.687 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.687 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbda5b7e062 ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.829 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.829 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"ea532de7-7ec9-4f7c-9d00-97d7c422c363", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae", Pod:"calico-apiserver-5b9f7b68ff-t4zc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibbda5b7e062", MAC:"06:6a:17:4e:d5:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.858662 containerd[1575]: 2026-04-17 23:52:03.848 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-t4zc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:03.899803 kubelet[2677]: I0417 23:52:03.897811 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6s6t\" (UniqueName: \"kubernetes.io/projected/f1fa0b13-6cfd-48c1-b1c5-5814b975204e-kube-api-access-s6s6t\") pod \"whisker-789694cb9f-nxp2s\" (UID: \"f1fa0b13-6cfd-48c1-b1c5-5814b975204e\") " pod="calico-system/whisker-789694cb9f-nxp2s" Apr 17 23:52:03.899803 kubelet[2677]: I0417 23:52:03.898002 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1fa0b13-6cfd-48c1-b1c5-5814b975204e-nginx-config\") pod \"whisker-789694cb9f-nxp2s\" (UID: \"f1fa0b13-6cfd-48c1-b1c5-5814b975204e\") " pod="calico-system/whisker-789694cb9f-nxp2s" Apr 17 23:52:03.899803 kubelet[2677]: I0417 23:52:03.898038 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1fa0b13-6cfd-48c1-b1c5-5814b975204e-whisker-backend-key-pair\") pod \"whisker-789694cb9f-nxp2s\" (UID: \"f1fa0b13-6cfd-48c1-b1c5-5814b975204e\") " pod="calico-system/whisker-789694cb9f-nxp2s" Apr 17 23:52:03.899803 kubelet[2677]: I0417 23:52:03.898051 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1fa0b13-6cfd-48c1-b1c5-5814b975204e-whisker-ca-bundle\") pod \"whisker-789694cb9f-nxp2s\" (UID: \"f1fa0b13-6cfd-48c1-b1c5-5814b975204e\") " pod="calico-system/whisker-789694cb9f-nxp2s" Apr 17 23:52:03.898982 systemd-networkd[1255]: cali601020e3276: Link UP Apr 17 23:52:03.902837 systemd-networkd[1255]: cali601020e3276: Gained 
carrier Apr 17 23:52:03.917235 containerd[1575]: time="2026-04-17T23:52:03.917162107Z" level=info msg="CreateContainer within sandbox \"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"960a61968d0c5cd07fbfc65ee03f76e8821e088c156e3074fa3bd0909f26a513\"" Apr 17 23:52:03.922293 containerd[1575]: time="2026-04-17T23:52:03.920799243Z" level=info msg="StartContainer for \"960a61968d0c5cd07fbfc65ee03f76e8821e088c156e3074fa3bd0909f26a513\"" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.154 [ERROR][4268] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.165 [INFO][4268] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0 calico-apiserver-5b9f7b68ff- calico-system 08184ab9-9f04-4144-ba8e-b4322834631d 1011 0 2026-04-17 23:51:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9f7b68ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b9f7b68ff-rghf4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali601020e3276 [] [] }} ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.165 [INFO][4268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" 
Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.307 [INFO][4334] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" HandleID="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.322 [INFO][4334] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" HandleID="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5b9f7b68ff-rghf4", "timestamp":"2026-04-17 23:52:03.307243131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000d0dc0)} Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.322 [INFO][4334] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.632 [INFO][4334] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.633 [INFO][4334] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.647 [INFO][4334] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.757 [INFO][4334] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.824 [INFO][4334] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.836 [INFO][4334] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.848 [INFO][4334] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.849 [INFO][4334] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.859 [INFO][4334] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.868 [INFO][4334] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.881 [INFO][4334] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.881 [INFO][4334] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" host="localhost" Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.881 [INFO][4334] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:03.934494 containerd[1575]: 2026-04-17 23:52:03.882 [INFO][4334] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" HandleID="k8s-pod-network.69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.887 [INFO][4268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"08184ab9-9f04-4144-ba8e-b4322834631d", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b9f7b68ff-rghf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601020e3276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.887 [INFO][4268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.887 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali601020e3276 ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.907 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.909 [INFO][4268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"08184ab9-9f04-4144-ba8e-b4322834631d", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e", Pod:"calico-apiserver-5b9f7b68ff-rghf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601020e3276", MAC:"02:44:72:b4:15:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:03.935032 containerd[1575]: 2026-04-17 23:52:03.929 [INFO][4268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e" Namespace="calico-system" Pod="calico-apiserver-5b9f7b68ff-rghf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:03.953154 containerd[1575]: time="2026-04-17T23:52:03.951484966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:03.953154 containerd[1575]: time="2026-04-17T23:52:03.951535159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:03.953154 containerd[1575]: time="2026-04-17T23:52:03.951585351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:03.953154 containerd[1575]: time="2026-04-17T23:52:03.951764406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.006934 containerd[1575]: time="2026-04-17T23:52:04.004016598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:04.006934 containerd[1575]: time="2026-04-17T23:52:04.004058560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:04.006934 containerd[1575]: time="2026-04-17T23:52:04.004067548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.006934 containerd[1575]: time="2026-04-17T23:52:04.004202048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.020758 systemd-networkd[1255]: cali3f3169b5a6d: Link UP Apr 17 23:52:04.022676 systemd-networkd[1255]: cali3f3169b5a6d: Gained carrier Apr 17 23:52:04.032515 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.233 [ERROR][4305] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.272 [INFO][4305] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--xtkhw-eth0 goldmane-5b85766d88- calico-system be1918a4-9e90-448a-95bf-d09779e58ce9 1007 0 2026-04-17 23:51:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-xtkhw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3f3169b5a6d [] [] }} ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.273 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.346 [INFO][4390] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" HandleID="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.360 [INFO][4390] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" HandleID="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003887b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-xtkhw", "timestamp":"2026-04-17 23:52:03.346640576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005551e0)} Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.361 [INFO][4390] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.882 [INFO][4390] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.882 [INFO][4390] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.892 [INFO][4390] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.905 [INFO][4390] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.931 [INFO][4390] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.937 [INFO][4390] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.945 [INFO][4390] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.946 [INFO][4390] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.950 [INFO][4390] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396 Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.956 [INFO][4390] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.971 [INFO][4390] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.972 [INFO][4390] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" host="localhost" Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.972 [INFO][4390] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.054487 containerd[1575]: 2026-04-17 23:52:03.972 [INFO][4390] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" HandleID="k8s-pod-network.6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:03.979 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xtkhw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"be1918a4-9e90-448a-95bf-d09779e58ce9", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-xtkhw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f3169b5a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:03.980 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:03.980 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f3169b5a6d ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:04.024 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:04.024 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xtkhw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"be1918a4-9e90-448a-95bf-d09779e58ce9", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396", Pod:"goldmane-5b85766d88-xtkhw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f3169b5a6d", MAC:"4e:cd:19:d1:85:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:04.059115 containerd[1575]: 2026-04-17 23:52:04.047 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396" Namespace="calico-system" Pod="goldmane-5b85766d88-xtkhw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0" Apr 17 23:52:04.074940 containerd[1575]: time="2026-04-17T23:52:04.072944023Z" level=info msg="StartContainer for 
\"960a61968d0c5cd07fbfc65ee03f76e8821e088c156e3074fa3bd0909f26a513\" returns successfully" Apr 17 23:52:04.115612 containerd[1575]: time="2026-04-17T23:52:04.115581039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-t4zc8,Uid:ea532de7-7ec9-4f7c-9d00-97d7c422c363,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae\"" Apr 17 23:52:04.136689 containerd[1575]: time="2026-04-17T23:52:04.136646517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789694cb9f-nxp2s,Uid:f1fa0b13-6cfd-48c1-b1c5-5814b975204e,Namespace:calico-system,Attempt:0,}" Apr 17 23:52:04.141347 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:04.159488 containerd[1575]: time="2026-04-17T23:52:04.159300643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:04.159488 containerd[1575]: time="2026-04-17T23:52:04.159410509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:04.159488 containerd[1575]: time="2026-04-17T23:52:04.159422715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.159729 containerd[1575]: time="2026-04-17T23:52:04.159550097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.215380 containerd[1575]: time="2026-04-17T23:52:04.215178809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f7b68ff-rghf4,Uid:08184ab9-9f04-4144-ba8e-b4322834631d,Namespace:calico-system,Attempt:1,} returns sandbox id \"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e\"" Apr 17 23:52:04.235616 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:04.293607 kernel: calico-node[4567]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:52:04.294795 containerd[1575]: time="2026-04-17T23:52:04.294740898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xtkhw,Uid:be1918a4-9e90-448a-95bf-d09779e58ce9,Namespace:calico-system,Attempt:1,} returns sandbox id \"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396\"" Apr 17 23:52:04.318823 systemd-networkd[1255]: cali03160cfbc1c: Gained IPv6LL Apr 17 23:52:04.404574 systemd-networkd[1255]: calic453cdfe11d: Link UP Apr 17 23:52:04.414531 systemd-networkd[1255]: calic453cdfe11d: Gained carrier Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.239 [INFO][4874] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--789694cb9f--nxp2s-eth0 whisker-789694cb9f- calico-system f1fa0b13-6cfd-48c1-b1c5-5814b975204e 1062 0 2026-04-17 23:52:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:789694cb9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-789694cb9f-nxp2s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic453cdfe11d [] [] }} ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" 
Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.239 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.296 [INFO][4914] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" HandleID="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Workload="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.310 [INFO][4914] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" HandleID="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Workload="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f8140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-789694cb9f-nxp2s", "timestamp":"2026-04-17 23:52:04.296783409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003289a0)} Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.310 [INFO][4914] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.310 [INFO][4914] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.310 [INFO][4914] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.320 [INFO][4914] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.336 [INFO][4914] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.346 [INFO][4914] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.352 [INFO][4914] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.361 [INFO][4914] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.361 [INFO][4914] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.370 [INFO][4914] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.382 [INFO][4914] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.397 [INFO][4914] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.397 [INFO][4914] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" host="localhost" Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.397 [INFO][4914] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:04.432340 containerd[1575]: 2026-04-17 23:52:04.397 [INFO][4914] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" HandleID="k8s-pod-network.805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Workload="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.400 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--789694cb9f--nxp2s-eth0", GenerateName:"whisker-789694cb9f-", Namespace:"calico-system", SelfLink:"", UID:"f1fa0b13-6cfd-48c1-b1c5-5814b975204e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789694cb9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-789694cb9f-nxp2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic453cdfe11d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.401 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.401 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic453cdfe11d ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.405 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.405 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" 
WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--789694cb9f--nxp2s-eth0", GenerateName:"whisker-789694cb9f-", Namespace:"calico-system", SelfLink:"", UID:"f1fa0b13-6cfd-48c1-b1c5-5814b975204e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 52, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789694cb9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e", Pod:"whisker-789694cb9f-nxp2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic453cdfe11d", MAC:"16:00:cd:04:b0:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:04.433011 containerd[1575]: 2026-04-17 23:52:04.427 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e" Namespace="calico-system" Pod="whisker-789694cb9f-nxp2s" WorkloadEndpoint="localhost-k8s-whisker--789694cb9f--nxp2s-eth0" Apr 17 23:52:04.501372 containerd[1575]: time="2026-04-17T23:52:04.500763491Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:52:04.501372 containerd[1575]: time="2026-04-17T23:52:04.500834397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:52:04.501372 containerd[1575]: time="2026-04-17T23:52:04.500852150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.501372 containerd[1575]: time="2026-04-17T23:52:04.500999257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:52:04.510667 systemd-networkd[1255]: caliad3eb5973e5: Gained IPv6LL Apr 17 23:52:04.539569 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:52:04.591019 containerd[1575]: time="2026-04-17T23:52:04.590947289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789694cb9f-nxp2s,Uid:f1fa0b13-6cfd-48c1-b1c5-5814b975204e,Namespace:calico-system,Attempt:0,} returns sandbox id \"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e\"" Apr 17 23:52:04.708868 kubelet[2677]: E0417 23:52:04.708829 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:04.724362 kubelet[2677]: E0417 23:52:04.723860 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:04.755585 kubelet[2677]: I0417 23:52:04.754019 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k6w8b" podStartSLOduration=34.753999311 podStartE2EDuration="34.753999311s" 
podCreationTimestamp="2026-04-17 23:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:52:04.753580451 +0000 UTC m=+39.526760977" watchObservedRunningTime="2026-04-17 23:52:04.753999311 +0000 UTC m=+39.527179841" Apr 17 23:52:04.767302 systemd-networkd[1255]: cali90255fd1f65: Gained IPv6LL Apr 17 23:52:04.917563 systemd-networkd[1255]: vxlan.calico: Link UP Apr 17 23:52:04.917568 systemd-networkd[1255]: vxlan.calico: Gained carrier Apr 17 23:52:04.999068 containerd[1575]: time="2026-04-17T23:52:04.998986793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:05.001142 containerd[1575]: time="2026-04-17T23:52:05.001013372Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:05.001142 containerd[1575]: time="2026-04-17T23:52:05.001068536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:52:05.005176 containerd[1575]: time="2026-04-17T23:52:05.004217610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:05.005176 containerd[1575]: time="2026-04-17T23:52:05.004814213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.038668209s" Apr 17 23:52:05.005176 containerd[1575]: time="2026-04-17T23:52:05.004835363Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:52:05.006998 containerd[1575]: time="2026-04-17T23:52:05.006836665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:52:05.017918 containerd[1575]: time="2026-04-17T23:52:05.017351054Z" level=info msg="CreateContainer within sandbox \"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:52:05.036367 containerd[1575]: time="2026-04-17T23:52:05.036228467Z" level=info msg="CreateContainer within sandbox \"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7239c07d5c228f432e7625b3331bff9f0eb24ed793f3780900565b3d851935d9\"" Apr 17 23:52:05.037821 containerd[1575]: time="2026-04-17T23:52:05.037694557Z" level=info msg="StartContainer for \"7239c07d5c228f432e7625b3331bff9f0eb24ed793f3780900565b3d851935d9\"" Apr 17 23:52:05.088013 systemd-networkd[1255]: calibbda5b7e062: Gained IPv6LL Apr 17 23:52:05.146987 containerd[1575]: time="2026-04-17T23:52:05.146760344Z" level=info msg="StartContainer for \"7239c07d5c228f432e7625b3331bff9f0eb24ed793f3780900565b3d851935d9\" returns successfully" Apr 17 23:52:05.152604 systemd-networkd[1255]: caliea2d562a022: Gained IPv6LL Apr 17 23:52:05.331931 kubelet[2677]: I0417 23:52:05.330082 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c020ef-0550-43e0-8dde-85cd19073ed7" path="/var/lib/kubelet/pods/e8c020ef-0550-43e0-8dde-85cd19073ed7/volumes" Apr 17 23:52:05.407597 systemd-networkd[1255]: cali3f3169b5a6d: Gained IPv6LL Apr 17 23:52:05.535977 systemd-networkd[1255]: cali601020e3276: Gained IPv6LL Apr 17 23:52:05.724371 kubelet[2677]: E0417 23:52:05.724294 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:05.724371 kubelet[2677]: E0417 23:52:05.724295 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:05.919081 systemd-networkd[1255]: calic453cdfe11d: Gained IPv6LL Apr 17 23:52:06.687254 systemd-networkd[1255]: vxlan.calico: Gained IPv6LL Apr 17 23:52:06.729182 kubelet[2677]: E0417 23:52:06.729087 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:52:06.940207 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562). Apr 17 23:52:06.992793 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:06.995536 sshd[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:07.000860 systemd-logind[1553]: New session 9 of user core. Apr 17 23:52:07.008830 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:52:07.271130 sshd[5153]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:07.275620 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:52:07.275865 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:50562.service: Deactivated successfully. Apr 17 23:52:07.281236 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:52:07.282337 systemd-logind[1553]: Removed session 9. 
Apr 17 23:52:07.863235 containerd[1575]: time="2026-04-17T23:52:07.863133786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:07.864429 containerd[1575]: time="2026-04-17T23:52:07.864319045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:52:07.865497 containerd[1575]: time="2026-04-17T23:52:07.865322519Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:07.868120 containerd[1575]: time="2026-04-17T23:52:07.867945058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:07.868920 containerd[1575]: time="2026-04-17T23:52:07.868842218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.861265693s" Apr 17 23:52:07.868991 containerd[1575]: time="2026-04-17T23:52:07.868937958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:52:07.871320 containerd[1575]: time="2026-04-17T23:52:07.871238385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:52:07.881147 containerd[1575]: time="2026-04-17T23:52:07.881099337Z" level=info msg="CreateContainer within sandbox 
\"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:52:07.899607 containerd[1575]: time="2026-04-17T23:52:07.899420867Z" level=info msg="CreateContainer within sandbox \"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ef59e96dafe29de456b3301c015b6d128b20fc16b3addc68653d196e81da28fd\"" Apr 17 23:52:07.900826 containerd[1575]: time="2026-04-17T23:52:07.900786567Z" level=info msg="StartContainer for \"ef59e96dafe29de456b3301c015b6d128b20fc16b3addc68653d196e81da28fd\"" Apr 17 23:52:07.984140 containerd[1575]: time="2026-04-17T23:52:07.984037362Z" level=info msg="StartContainer for \"ef59e96dafe29de456b3301c015b6d128b20fc16b3addc68653d196e81da28fd\" returns successfully" Apr 17 23:52:08.757370 kubelet[2677]: I0417 23:52:08.757147 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d9f98598b-s7zb8" podStartSLOduration=22.669994576 podStartE2EDuration="26.757132226s" podCreationTimestamp="2026-04-17 23:51:42 +0000 UTC" firstStartedPulling="2026-04-17 23:52:03.784017479 +0000 UTC m=+38.557197997" lastFinishedPulling="2026-04-17 23:52:07.871155124 +0000 UTC m=+42.644335647" observedRunningTime="2026-04-17 23:52:08.757028255 +0000 UTC m=+43.530208772" watchObservedRunningTime="2026-04-17 23:52:08.757132226 +0000 UTC m=+43.530312753" Apr 17 23:52:10.940180 containerd[1575]: time="2026-04-17T23:52:10.939816602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:10.941666 containerd[1575]: time="2026-04-17T23:52:10.941159850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:52:10.943699 containerd[1575]: 
time="2026-04-17T23:52:10.943569402Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:10.950286 containerd[1575]: time="2026-04-17T23:52:10.950105264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:10.950573 containerd[1575]: time="2026-04-17T23:52:10.950546140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.079264133s" Apr 17 23:52:10.950630 containerd[1575]: time="2026-04-17T23:52:10.950581074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:52:10.952345 containerd[1575]: time="2026-04-17T23:52:10.952179285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:52:10.958617 containerd[1575]: time="2026-04-17T23:52:10.958563906Z" level=info msg="CreateContainer within sandbox \"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:52:10.986361 containerd[1575]: time="2026-04-17T23:52:10.986283709Z" level=info msg="CreateContainer within sandbox \"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d7f8337ae47ad5a857b0bd3081ecda9fb9549b6fa5f7d3243256b54fb420910d\"" Apr 17 23:52:10.987960 
containerd[1575]: time="2026-04-17T23:52:10.987718478Z" level=info msg="StartContainer for \"d7f8337ae47ad5a857b0bd3081ecda9fb9549b6fa5f7d3243256b54fb420910d\"" Apr 17 23:52:11.032172 systemd[1]: run-containerd-runc-k8s.io-d7f8337ae47ad5a857b0bd3081ecda9fb9549b6fa5f7d3243256b54fb420910d-runc.Njhm02.mount: Deactivated successfully. Apr 17 23:52:11.071304 containerd[1575]: time="2026-04-17T23:52:11.070744399Z" level=info msg="StartContainer for \"d7f8337ae47ad5a857b0bd3081ecda9fb9549b6fa5f7d3243256b54fb420910d\" returns successfully" Apr 17 23:52:11.356916 containerd[1575]: time="2026-04-17T23:52:11.356596990Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:11.357837 containerd[1575]: time="2026-04-17T23:52:11.357774953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:52:11.359921 containerd[1575]: time="2026-04-17T23:52:11.359674553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 407.441724ms" Apr 17 23:52:11.360685 containerd[1575]: time="2026-04-17T23:52:11.360017752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:52:11.362757 containerd[1575]: time="2026-04-17T23:52:11.362716098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:52:11.366495 containerd[1575]: time="2026-04-17T23:52:11.366184227Z" level=info msg="CreateContainer within sandbox 
\"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:52:11.384278 containerd[1575]: time="2026-04-17T23:52:11.384147582Z" level=info msg="CreateContainer within sandbox \"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dbc4b8f7917c0c5880deaabe0436bbe5538c3ee3095ccf8de8357f8509d17436\"" Apr 17 23:52:11.385368 containerd[1575]: time="2026-04-17T23:52:11.385173745Z" level=info msg="StartContainer for \"dbc4b8f7917c0c5880deaabe0436bbe5538c3ee3095ccf8de8357f8509d17436\"" Apr 17 23:52:11.484674 containerd[1575]: time="2026-04-17T23:52:11.484334900Z" level=info msg="StartContainer for \"dbc4b8f7917c0c5880deaabe0436bbe5538c3ee3095ccf8de8357f8509d17436\" returns successfully" Apr 17 23:52:11.773239 kubelet[2677]: I0417 23:52:11.771524 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5b9f7b68ff-rghf4" podStartSLOduration=24.628837206 podStartE2EDuration="31.771511209s" podCreationTimestamp="2026-04-17 23:51:40 +0000 UTC" firstStartedPulling="2026-04-17 23:52:04.218960993 +0000 UTC m=+38.992141512" lastFinishedPulling="2026-04-17 23:52:11.36163498 +0000 UTC m=+46.134815515" observedRunningTime="2026-04-17 23:52:11.770832423 +0000 UTC m=+46.544012949" watchObservedRunningTime="2026-04-17 23:52:11.771511209 +0000 UTC m=+46.544691738" Apr 17 23:52:12.278794 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:55034.service - OpenSSH per-connection server daemon (10.0.0.1:55034). Apr 17 23:52:12.342765 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 55034 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:12.347016 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:12.357983 systemd-logind[1553]: New session 10 of user core. 
Apr 17 23:52:12.364251 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:52:12.696220 sshd[5352]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:12.710289 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:55034.service: Deactivated successfully. Apr 17 23:52:12.721309 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:52:12.732962 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:52:12.737992 systemd-logind[1553]: Removed session 10. Apr 17 23:52:12.790174 kubelet[2677]: I0417 23:52:12.790092 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:52:13.354958 kubelet[2677]: I0417 23:52:13.354377 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5b9f7b68ff-t4zc8" podStartSLOduration=26.525931447 podStartE2EDuration="33.354349215s" podCreationTimestamp="2026-04-17 23:51:40 +0000 UTC" firstStartedPulling="2026-04-17 23:52:04.123583421 +0000 UTC m=+38.896763939" lastFinishedPulling="2026-04-17 23:52:10.952001171 +0000 UTC m=+45.725181707" observedRunningTime="2026-04-17 23:52:11.807747199 +0000 UTC m=+46.580927721" watchObservedRunningTime="2026-04-17 23:52:13.354349215 +0000 UTC m=+48.127529749" Apr 17 23:52:14.015170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043902764.mount: Deactivated successfully. 
Apr 17 23:52:14.525569 containerd[1575]: time="2026-04-17T23:52:14.525359229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:14.526320 containerd[1575]: time="2026-04-17T23:52:14.526025878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:52:14.527715 containerd[1575]: time="2026-04-17T23:52:14.527664300Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:14.530618 containerd[1575]: time="2026-04-17T23:52:14.530511158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:14.539408 containerd[1575]: time="2026-04-17T23:52:14.539322169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.176558206s" Apr 17 23:52:14.539408 containerd[1575]: time="2026-04-17T23:52:14.539385372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:52:14.540894 containerd[1575]: time="2026-04-17T23:52:14.540755313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:52:14.547717 containerd[1575]: time="2026-04-17T23:52:14.547599234Z" level=info msg="CreateContainer within sandbox 
\"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:52:14.574085 containerd[1575]: time="2026-04-17T23:52:14.574003071Z" level=info msg="CreateContainer within sandbox \"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"406309f990bd2329c9a771f66360af38b8ffb921960ca367c018d843a7031767\"" Apr 17 23:52:14.574998 containerd[1575]: time="2026-04-17T23:52:14.574950794Z" level=info msg="StartContainer for \"406309f990bd2329c9a771f66360af38b8ffb921960ca367c018d843a7031767\"" Apr 17 23:52:14.663396 containerd[1575]: time="2026-04-17T23:52:14.663338696Z" level=info msg="StartContainer for \"406309f990bd2329c9a771f66360af38b8ffb921960ca367c018d843a7031767\" returns successfully" Apr 17 23:52:15.959064 kubelet[2677]: I0417 23:52:15.958960 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xtkhw" podStartSLOduration=24.715171462 podStartE2EDuration="34.958937501s" podCreationTimestamp="2026-04-17 23:51:41 +0000 UTC" firstStartedPulling="2026-04-17 23:52:04.296711967 +0000 UTC m=+39.069892489" lastFinishedPulling="2026-04-17 23:52:14.540478006 +0000 UTC m=+49.313658528" observedRunningTime="2026-04-17 23:52:14.829386027 +0000 UTC m=+49.602566559" watchObservedRunningTime="2026-04-17 23:52:15.958937501 +0000 UTC m=+50.732118030" Apr 17 23:52:16.225093 containerd[1575]: time="2026-04-17T23:52:16.224995771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:16.226364 containerd[1575]: time="2026-04-17T23:52:16.225698792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:52:16.227598 containerd[1575]: time="2026-04-17T23:52:16.227533825Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:16.232719 containerd[1575]: time="2026-04-17T23:52:16.232263172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:16.236261 containerd[1575]: time="2026-04-17T23:52:16.236192049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.69539916s" Apr 17 23:52:16.236512 containerd[1575]: time="2026-04-17T23:52:16.236299210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:52:16.237945 containerd[1575]: time="2026-04-17T23:52:16.237928202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:52:16.249606 containerd[1575]: time="2026-04-17T23:52:16.248669877Z" level=info msg="CreateContainer within sandbox \"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:52:16.283105 containerd[1575]: time="2026-04-17T23:52:16.282997887Z" level=info msg="CreateContainer within sandbox \"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d1091b83670111a4f1c65a0fc40e8ef052acd23619ed3e7db9bdbaf3ee97a891\"" Apr 17 23:52:16.283944 containerd[1575]: time="2026-04-17T23:52:16.283890242Z" level=info msg="StartContainer for 
\"d1091b83670111a4f1c65a0fc40e8ef052acd23619ed3e7db9bdbaf3ee97a891\"" Apr 17 23:52:16.432291 containerd[1575]: time="2026-04-17T23:52:16.432118065Z" level=info msg="StartContainer for \"d1091b83670111a4f1c65a0fc40e8ef052acd23619ed3e7db9bdbaf3ee97a891\" returns successfully" Apr 17 23:52:17.709910 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:55154.service - OpenSSH per-connection server daemon (10.0.0.1:55154). Apr 17 23:52:17.783258 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 55154 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:17.785107 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:17.794506 systemd-logind[1553]: New session 11 of user core. Apr 17 23:52:17.800893 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:52:17.958821 containerd[1575]: time="2026-04-17T23:52:17.958748746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:17.960414 containerd[1575]: time="2026-04-17T23:52:17.960230681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:52:17.961914 containerd[1575]: time="2026-04-17T23:52:17.961799190Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:17.964500 containerd[1575]: time="2026-04-17T23:52:17.964334491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:17.965536 containerd[1575]: time="2026-04-17T23:52:17.965431821Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.727418989s" Apr 17 23:52:17.965536 containerd[1575]: time="2026-04-17T23:52:17.965551738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:52:17.967618 containerd[1575]: time="2026-04-17T23:52:17.967516294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:52:17.972615 containerd[1575]: time="2026-04-17T23:52:17.972266177Z" level=info msg="CreateContainer within sandbox \"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:52:18.036045 containerd[1575]: time="2026-04-17T23:52:18.035805856Z" level=info msg="CreateContainer within sandbox \"a9fcfdabb7bfe6954e372c56beb607d22a5c96eaaa23d07e80314710ee37f129\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7eb91a4b6ccf9f13fa47f348c85adde10c295048f7f371b4741104f151056791\"" Apr 17 23:52:18.039513 containerd[1575]: time="2026-04-17T23:52:18.037320447Z" level=info msg="StartContainer for \"7eb91a4b6ccf9f13fa47f348c85adde10c295048f7f371b4741104f151056791\"" Apr 17 23:52:18.169558 containerd[1575]: time="2026-04-17T23:52:18.169392820Z" level=info msg="StartContainer for \"7eb91a4b6ccf9f13fa47f348c85adde10c295048f7f371b4741104f151056791\" returns successfully" Apr 17 23:52:18.237695 sshd[5544]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:18.248758 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:55162.service - 
OpenSSH per-connection server daemon (10.0.0.1:55162). Apr 17 23:52:18.250164 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:55154.service: Deactivated successfully. Apr 17 23:52:18.253749 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:52:18.255312 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:52:18.257547 systemd-logind[1553]: Removed session 11. Apr 17 23:52:18.282401 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 55162 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:18.284157 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:18.294785 systemd-logind[1553]: New session 12 of user core. Apr 17 23:52:18.306995 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:52:18.520379 sshd[5602]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:18.530703 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176). Apr 17 23:52:18.532915 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:55162.service: Deactivated successfully. Apr 17 23:52:18.534335 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:52:18.541566 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:52:18.543765 systemd-logind[1553]: Removed session 12. Apr 17 23:52:18.607829 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:18.609611 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:18.615253 systemd-logind[1553]: New session 13 of user core. Apr 17 23:52:18.635665 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 17 23:52:18.745666 kubelet[2677]: I0417 23:52:18.745564 2677 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:52:18.748925 kubelet[2677]: I0417 23:52:18.748861 2677 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:52:18.873014 sshd[5616]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:18.876252 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:55176.service: Deactivated successfully. Apr 17 23:52:18.879358 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:52:18.880130 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:52:18.881691 systemd-logind[1553]: Removed session 13. Apr 17 23:52:20.227121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645965172.mount: Deactivated successfully. 
Apr 17 23:52:20.256924 containerd[1575]: time="2026-04-17T23:52:20.256805310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:20.257971 containerd[1575]: time="2026-04-17T23:52:20.257882791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:52:20.259403 containerd[1575]: time="2026-04-17T23:52:20.259289891Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:20.264061 containerd[1575]: time="2026-04-17T23:52:20.263955266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.29637209s" Apr 17 23:52:20.264061 containerd[1575]: time="2026-04-17T23:52:20.264066320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:52:20.264307 containerd[1575]: time="2026-04-17T23:52:20.264073612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:52:20.282735 containerd[1575]: time="2026-04-17T23:52:20.282668663Z" level=info msg="CreateContainer within sandbox \"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:52:20.304095 
containerd[1575]: time="2026-04-17T23:52:20.304008017Z" level=info msg="CreateContainer within sandbox \"805ec1caea713125431304a30d7679ce96a0a4a68bc98541d7932ca933b07a7e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d30b99f4a6fe97fe98f42596969c5a1e091d32e501f5bc63c54becefdb7aac42\"" Apr 17 23:52:20.305254 containerd[1575]: time="2026-04-17T23:52:20.304900732Z" level=info msg="StartContainer for \"d30b99f4a6fe97fe98f42596969c5a1e091d32e501f5bc63c54becefdb7aac42\"" Apr 17 23:52:20.401716 containerd[1575]: time="2026-04-17T23:52:20.400888924Z" level=info msg="StartContainer for \"d30b99f4a6fe97fe98f42596969c5a1e091d32e501f5bc63c54becefdb7aac42\" returns successfully" Apr 17 23:52:20.863394 kubelet[2677]: I0417 23:52:20.863318 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nxr77" podStartSLOduration=23.859663974 podStartE2EDuration="38.863300732s" podCreationTimestamp="2026-04-17 23:51:42 +0000 UTC" firstStartedPulling="2026-04-17 23:52:02.963651499 +0000 UTC m=+37.736832017" lastFinishedPulling="2026-04-17 23:52:17.967288254 +0000 UTC m=+52.740468775" observedRunningTime="2026-04-17 23:52:18.844178303 +0000 UTC m=+53.617358852" watchObservedRunningTime="2026-04-17 23:52:20.863300732 +0000 UTC m=+55.636481261" Apr 17 23:52:23.890947 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:48752.service - OpenSSH per-connection server daemon (10.0.0.1:48752). Apr 17 23:52:23.940821 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 48752 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:23.942315 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:23.948190 systemd-logind[1553]: New session 14 of user core. Apr 17 23:52:23.956606 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 17 23:52:24.173170 sshd[5683]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:24.180957 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:48766.service - OpenSSH per-connection server daemon (10.0.0.1:48766). Apr 17 23:52:24.181598 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:48752.service: Deactivated successfully. Apr 17 23:52:24.186020 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:52:24.188535 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:52:24.190575 systemd-logind[1553]: Removed session 14. Apr 17 23:52:24.215367 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:24.218213 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:24.224660 systemd-logind[1553]: New session 15 of user core. Apr 17 23:52:24.241628 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:52:24.530966 sshd[5695]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:24.541376 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:48770.service - OpenSSH per-connection server daemon (10.0.0.1:48770). Apr 17 23:52:24.542876 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:48766.service: Deactivated successfully. Apr 17 23:52:24.545333 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:52:24.546979 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:52:24.548831 systemd-logind[1553]: Removed session 15. Apr 17 23:52:24.576195 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 48770 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:24.577640 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:24.586732 systemd-logind[1553]: New session 16 of user core. 
Apr 17 23:52:24.596097 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:52:25.353726 sshd[5708]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:25.366763 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:48772.service - OpenSSH per-connection server daemon (10.0.0.1:48772). Apr 17 23:52:25.371371 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:48770.service: Deactivated successfully. Apr 17 23:52:25.389653 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:52:25.405635 containerd[1575]: time="2026-04-17T23:52:25.405577249Z" level=info msg="StopPodSandbox for \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\"" Apr 17 23:52:25.405749 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:52:25.412397 systemd-logind[1553]: Removed session 16. Apr 17 23:52:25.499185 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 48772 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:25.500992 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:25.507798 systemd-logind[1553]: New session 17 of user core. Apr 17 23:52:25.513721 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.538 [WARNING][5761] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"08184ab9-9f04-4144-ba8e-b4322834631d", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e", Pod:"calico-apiserver-5b9f7b68ff-rghf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601020e3276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.542 [INFO][5761] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.542 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" iface="eth0" netns="" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.542 [INFO][5761] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.542 [INFO][5761] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.613 [INFO][5779] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.619 [INFO][5779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.619 [INFO][5779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.630 [WARNING][5779] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.630 [INFO][5779] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.635 [INFO][5779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:25.644789 containerd[1575]: 2026-04-17 23:52:25.638 [INFO][5761] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.657231 containerd[1575]: time="2026-04-17T23:52:25.656187005Z" level=info msg="TearDown network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" successfully" Apr 17 23:52:25.657231 containerd[1575]: time="2026-04-17T23:52:25.656213979Z" level=info msg="StopPodSandbox for \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" returns successfully" Apr 17 23:52:25.699135 containerd[1575]: time="2026-04-17T23:52:25.699025794Z" level=info msg="RemovePodSandbox for \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\"" Apr 17 23:52:25.711340 containerd[1575]: time="2026-04-17T23:52:25.711153453Z" level=info msg="Forcibly stopping sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\"" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.774 [WARNING][5803] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"08184ab9-9f04-4144-ba8e-b4322834631d", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69caa091da9f802988e646cf7cba9e98c0fec5952502ca70d7c2950ba6217b0e", Pod:"calico-apiserver-5b9f7b68ff-rghf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601020e3276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.775 [INFO][5803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.775 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" iface="eth0" netns="" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.775 [INFO][5803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.775 [INFO][5803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.820 [INFO][5811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.821 [INFO][5811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.821 [INFO][5811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.832 [WARNING][5811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.832 [INFO][5811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" HandleID="k8s-pod-network.27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--rghf4-eth0" Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.836 [INFO][5811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:25.842634 containerd[1575]: 2026-04-17 23:52:25.839 [INFO][5803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e" Apr 17 23:52:25.842634 containerd[1575]: time="2026-04-17T23:52:25.842326590Z" level=info msg="TearDown network for sandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" successfully" Apr 17 23:52:25.883953 containerd[1575]: time="2026-04-17T23:52:25.883820784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:25.884198 containerd[1575]: time="2026-04-17T23:52:25.883999628Z" level=info msg="RemovePodSandbox \"27aa0cd47777b0134273a41cd330d1960b0e29975e697f62e875ee27a67ea59e\" returns successfully" Apr 17 23:52:25.892744 containerd[1575]: time="2026-04-17T23:52:25.892684046Z" level=info msg="StopPodSandbox for \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\"" Apr 17 23:52:25.973207 sshd[5731]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:25.988711 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:48778.service - OpenSSH per-connection server daemon (10.0.0.1:48778). Apr 17 23:52:25.989257 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:48772.service: Deactivated successfully. Apr 17 23:52:25.995621 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:52:25.996408 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:52:26.002616 systemd-logind[1553]: Removed session 17. Apr 17 23:52:26.025117 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 48778 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:52:26.025832 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:52:26.031126 systemd-logind[1553]: New session 18 of user core. Apr 17 23:52:26.040279 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:25.977 [WARNING][5829] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa", Pod:"coredns-674b8bbfcf-qpwz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3eb5973e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:25.978 [INFO][5829] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:25.978 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" iface="eth0" netns="" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:25.978 [INFO][5829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:25.978 [INFO][5829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.039 [INFO][5842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.039 [INFO][5842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.039 [INFO][5842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.049 [WARNING][5842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.049 [INFO][5842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.051 [INFO][5842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.055687 containerd[1575]: 2026-04-17 23:52:26.053 [INFO][5829] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.056144 containerd[1575]: time="2026-04-17T23:52:26.055684898Z" level=info msg="TearDown network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" successfully" Apr 17 23:52:26.056144 containerd[1575]: time="2026-04-17T23:52:26.055707997Z" level=info msg="StopPodSandbox for \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" returns successfully" Apr 17 23:52:26.056479 containerd[1575]: time="2026-04-17T23:52:26.056276389Z" level=info msg="RemovePodSandbox for \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\"" Apr 17 23:52:26.056479 containerd[1575]: time="2026-04-17T23:52:26.056369824Z" level=info msg="Forcibly stopping sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\"" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.103 [WARNING][5862] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a23d8751-4902-4d1d-8ccf-8b84b4c25b8b", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b65770fd5eaf6182a00b4d77c727d68564fa5e73ffd69be45c449de6c4e11fa", Pod:"coredns-674b8bbfcf-qpwz2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad3eb5973e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.103 [INFO][5862] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.103 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" iface="eth0" netns="" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.103 [INFO][5862] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.104 [INFO][5862] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.131 [INFO][5877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.132 [INFO][5877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.132 [INFO][5877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.140 [WARNING][5877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.140 [INFO][5877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" HandleID="k8s-pod-network.9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Workload="localhost-k8s-coredns--674b8bbfcf--qpwz2-eth0" Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.142 [INFO][5877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.147177 containerd[1575]: 2026-04-17 23:52:26.145 [INFO][5862] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e" Apr 17 23:52:26.148224 containerd[1575]: time="2026-04-17T23:52:26.147209633Z" level=info msg="TearDown network for sandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" successfully" Apr 17 23:52:26.156150 containerd[1575]: time="2026-04-17T23:52:26.156045162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:26.156150 containerd[1575]: time="2026-04-17T23:52:26.156144676Z" level=info msg="RemovePodSandbox \"9f2873d8d94117377fc3dc605cafea480c580445399d6c94658025be493dff5e\" returns successfully" Apr 17 23:52:26.157134 containerd[1575]: time="2026-04-17T23:52:26.156865900Z" level=info msg="StopPodSandbox for \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\"" Apr 17 23:52:26.197369 sshd[5837]: pam_unix(sshd:session): session closed for user core Apr 17 23:52:26.201666 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:52:26.202329 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:48778.service: Deactivated successfully. Apr 17 23:52:26.206573 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:52:26.208782 systemd-logind[1553]: Removed session 18. Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.209 [WARNING][5896] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0", GenerateName:"calico-kube-controllers-d9f98598b-", Namespace:"calico-system", SelfLink:"", UID:"304b7f98-005c-457c-9b59-72da9a1db780", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9f98598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02", Pod:"calico-kube-controllers-d9f98598b-s7zb8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea2d562a022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.209 [INFO][5896] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.209 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" iface="eth0" netns="" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.209 [INFO][5896] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.209 [INFO][5896] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.254 [INFO][5908] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.255 [INFO][5908] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.255 [INFO][5908] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.265 [WARNING][5908] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.265 [INFO][5908] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.270 [INFO][5908] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.275599 containerd[1575]: 2026-04-17 23:52:26.272 [INFO][5896] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.275599 containerd[1575]: time="2026-04-17T23:52:26.275373198Z" level=info msg="TearDown network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" successfully" Apr 17 23:52:26.275599 containerd[1575]: time="2026-04-17T23:52:26.275429638Z" level=info msg="StopPodSandbox for \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" returns successfully" Apr 17 23:52:26.276929 containerd[1575]: time="2026-04-17T23:52:26.276829808Z" level=info msg="RemovePodSandbox for \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\"" Apr 17 23:52:26.277148 containerd[1575]: time="2026-04-17T23:52:26.277033437Z" level=info msg="Forcibly stopping sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\"" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.334 [WARNING][5926] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0", GenerateName:"calico-kube-controllers-d9f98598b-", Namespace:"calico-system", SelfLink:"", UID:"304b7f98-005c-457c-9b59-72da9a1db780", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9f98598b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5364500895d1f7f17fdfa281019a98e6e930a3927ff5f7c0361c3496cc2cba02", Pod:"calico-kube-controllers-d9f98598b-s7zb8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea2d562a022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.335 [INFO][5926] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.335 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" iface="eth0" netns="" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.335 [INFO][5926] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.335 [INFO][5926] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.378 [INFO][5935] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.379 [INFO][5935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.379 [INFO][5935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.390 [WARNING][5935] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.390 [INFO][5935] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" HandleID="k8s-pod-network.237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Workload="localhost-k8s-calico--kube--controllers--d9f98598b--s7zb8-eth0" Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.394 [INFO][5935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.399511 containerd[1575]: 2026-04-17 23:52:26.396 [INFO][5926] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b" Apr 17 23:52:26.399511 containerd[1575]: time="2026-04-17T23:52:26.399487277Z" level=info msg="TearDown network for sandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" successfully" Apr 17 23:52:26.403182 containerd[1575]: time="2026-04-17T23:52:26.403055767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:26.403182 containerd[1575]: time="2026-04-17T23:52:26.403125356Z" level=info msg="RemovePodSandbox \"237ef8fe087385adc92e568ceb0684c796d6ab5f9969d48762eae7de70d9694b\" returns successfully" Apr 17 23:52:26.404147 containerd[1575]: time="2026-04-17T23:52:26.404104872Z" level=info msg="StopPodSandbox for \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\"" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.468 [WARNING][5954] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"ea532de7-7ec9-4f7c-9d00-97d7c422c363", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae", Pod:"calico-apiserver-5b9f7b68ff-t4zc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibbda5b7e062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.470 [INFO][5954] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.471 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" iface="eth0" netns="" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.471 [INFO][5954] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.471 [INFO][5954] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.517 [INFO][5963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.518 [INFO][5963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.518 [INFO][5963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.540 [WARNING][5963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.540 [INFO][5963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.547 [INFO][5963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.552482 containerd[1575]: 2026-04-17 23:52:26.549 [INFO][5954] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.552482 containerd[1575]: time="2026-04-17T23:52:26.552349016Z" level=info msg="TearDown network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" successfully" Apr 17 23:52:26.552482 containerd[1575]: time="2026-04-17T23:52:26.552385574Z" level=info msg="StopPodSandbox for \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" returns successfully" Apr 17 23:52:26.554811 containerd[1575]: time="2026-04-17T23:52:26.554240931Z" level=info msg="RemovePodSandbox for \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\"" Apr 17 23:52:26.554811 containerd[1575]: time="2026-04-17T23:52:26.554283207Z" level=info msg="Forcibly stopping sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\"" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.619 [WARNING][5982] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0", GenerateName:"calico-apiserver-5b9f7b68ff-", Namespace:"calico-system", SelfLink:"", UID:"ea532de7-7ec9-4f7c-9d00-97d7c422c363", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f7b68ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc0b8bb297a39a5f9dd3821fb2717bb26e3ce67cb87e1d51a96a988fab3f09ae", Pod:"calico-apiserver-5b9f7b68ff-t4zc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibbda5b7e062", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.619 [INFO][5982] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.620 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" iface="eth0" netns="" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.620 [INFO][5982] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.620 [INFO][5982] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.652 [INFO][5990] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.653 [INFO][5990] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.653 [INFO][5990] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.664 [WARNING][5990] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.664 [INFO][5990] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" HandleID="k8s-pod-network.30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Workload="localhost-k8s-calico--apiserver--5b9f7b68ff--t4zc8-eth0" Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.668 [INFO][5990] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.675212 containerd[1575]: 2026-04-17 23:52:26.672 [INFO][5982] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca" Apr 17 23:52:26.675212 containerd[1575]: time="2026-04-17T23:52:26.675227392Z" level=info msg="TearDown network for sandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" successfully" Apr 17 23:52:26.679952 containerd[1575]: time="2026-04-17T23:52:26.679775537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:26.679952 containerd[1575]: time="2026-04-17T23:52:26.679899107Z" level=info msg="RemovePodSandbox \"30fea9c63c33a66277688686483a85347445401acfa32588add2ccae08c56cca\" returns successfully" Apr 17 23:52:26.680722 containerd[1575]: time="2026-04-17T23:52:26.680685026Z" level=info msg="StopPodSandbox for \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\"" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.733 [WARNING][6008] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" WorkloadEndpoint="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.734 [INFO][6008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.734 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" iface="eth0" netns="" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.734 [INFO][6008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.734 [INFO][6008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.770 [INFO][6016] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.771 [INFO][6016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.772 [INFO][6016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.782 [WARNING][6016] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.783 [INFO][6016] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.787 [INFO][6016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.792559 containerd[1575]: 2026-04-17 23:52:26.789 [INFO][6008] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.793201 containerd[1575]: time="2026-04-17T23:52:26.792616734Z" level=info msg="TearDown network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" successfully" Apr 17 23:52:26.793201 containerd[1575]: time="2026-04-17T23:52:26.792652268Z" level=info msg="StopPodSandbox for \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" returns successfully" Apr 17 23:52:26.793387 containerd[1575]: time="2026-04-17T23:52:26.793339763Z" level=info msg="RemovePodSandbox for \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\"" Apr 17 23:52:26.793421 containerd[1575]: time="2026-04-17T23:52:26.793399647Z" level=info msg="Forcibly stopping sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\"" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.842 [WARNING][6033] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" 
WorkloadEndpoint="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.842 [INFO][6033] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.842 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" iface="eth0" netns="" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.842 [INFO][6033] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.842 [INFO][6033] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.889 [INFO][6042] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.890 [INFO][6042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.890 [INFO][6042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.906 [WARNING][6042] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.906 [INFO][6042] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" HandleID="k8s-pod-network.0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Workload="localhost-k8s-whisker--cbd55db78--5z6c4-eth0" Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.913 [INFO][6042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:26.919833 containerd[1575]: 2026-04-17 23:52:26.916 [INFO][6033] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582" Apr 17 23:52:26.919833 containerd[1575]: time="2026-04-17T23:52:26.919186599Z" level=info msg="TearDown network for sandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" successfully" Apr 17 23:52:26.924980 containerd[1575]: time="2026-04-17T23:52:26.924748418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:26.924980 containerd[1575]: time="2026-04-17T23:52:26.924833565Z" level=info msg="RemovePodSandbox \"0ee64b61871e8d2b8d12a3fc16811d134d7fdee67aa45bcd25922b118e0f7582\" returns successfully" Apr 17 23:52:26.925640 containerd[1575]: time="2026-04-17T23:52:26.925588739Z" level=info msg="StopPodSandbox for \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\"" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:26.993 [WARNING][6060] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db5c2130-0e95-4916-badc-e8ed1ee5a320", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c", Pod:"coredns-674b8bbfcf-k6w8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90255fd1f65", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:26.993 [INFO][6060] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:26.993 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" iface="eth0" netns="" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:26.993 [INFO][6060] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:26.993 [INFO][6060] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.040 [INFO][6069] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.041 [INFO][6069] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.041 [INFO][6069] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.049 [WARNING][6069] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.049 [INFO][6069] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.053 [INFO][6069] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:27.058806 containerd[1575]: 2026-04-17 23:52:27.055 [INFO][6060] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.059785 containerd[1575]: time="2026-04-17T23:52:27.058951680Z" level=info msg="TearDown network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" successfully" Apr 17 23:52:27.059785 containerd[1575]: time="2026-04-17T23:52:27.058988829Z" level=info msg="StopPodSandbox for \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" returns successfully" Apr 17 23:52:27.060211 containerd[1575]: time="2026-04-17T23:52:27.060116391Z" level=info msg="RemovePodSandbox for \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\"" Apr 17 23:52:27.060654 containerd[1575]: time="2026-04-17T23:52:27.060252450Z" level=info msg="Forcibly stopping sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\"" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.132 [WARNING][6087] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"db5c2130-0e95-4916-badc-e8ed1ee5a320", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1abb8dbe5d26f885305f919e84fe536910479cff74002631fc7acae20a2b01c", Pod:"coredns-674b8bbfcf-k6w8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90255fd1f65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.132 [INFO][6087] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.132 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" iface="eth0" netns="" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.132 [INFO][6087] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.132 [INFO][6087] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.164 [INFO][6096] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.165 [INFO][6096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.165 [INFO][6096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.172 [WARNING][6096] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.172 [INFO][6096] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" HandleID="k8s-pod-network.e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Workload="localhost-k8s-coredns--674b8bbfcf--k6w8b-eth0" Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.178 [INFO][6096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:52:27.188601 containerd[1575]: 2026-04-17 23:52:27.185 [INFO][6087] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7" Apr 17 23:52:27.188601 containerd[1575]: time="2026-04-17T23:52:27.188375110Z" level=info msg="TearDown network for sandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" successfully" Apr 17 23:52:27.194297 containerd[1575]: time="2026-04-17T23:52:27.194074954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:52:27.194297 containerd[1575]: time="2026-04-17T23:52:27.194175946Z" level=info msg="RemovePodSandbox \"e489aedb58727de5c79a2bee35ecc611576dbd2814ad017ab0bf02e7ae3045b7\" returns successfully"
Apr 17 23:52:27.195025 containerd[1575]: time="2026-04-17T23:52:27.194976054Z" level=info msg="StopPodSandbox for \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\""
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.255 [WARNING][6114] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xtkhw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"be1918a4-9e90-448a-95bf-d09779e58ce9", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396", Pod:"goldmane-5b85766d88-xtkhw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f3169b5a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.256 [INFO][6114] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.256 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" iface="eth0" netns=""
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.256 [INFO][6114] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.256 [INFO][6114] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.288 [INFO][6123] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.288 [INFO][6123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.288 [INFO][6123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.302 [WARNING][6123] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.302 [INFO][6123] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.308 [INFO][6123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:52:27.313733 containerd[1575]: 2026-04-17 23:52:27.311 [INFO][6114] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.313733 containerd[1575]: time="2026-04-17T23:52:27.313724519Z" level=info msg="TearDown network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" successfully"
Apr 17 23:52:27.313733 containerd[1575]: time="2026-04-17T23:52:27.313763314Z" level=info msg="StopPodSandbox for \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" returns successfully"
Apr 17 23:52:27.314676 containerd[1575]: time="2026-04-17T23:52:27.314613940Z" level=info msg="RemovePodSandbox for \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\""
Apr 17 23:52:27.314705 containerd[1575]: time="2026-04-17T23:52:27.314677213Z" level=info msg="Forcibly stopping sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\""
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.373 [WARNING][6141] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--xtkhw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"be1918a4-9e90-448a-95bf-d09779e58ce9", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 51, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c085736d2a2d1f61ad2bf7929774c7fd7f2f801d35a14584a7fe0187fb17396", Pod:"goldmane-5b85766d88-xtkhw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f3169b5a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.374 [INFO][6141] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.374 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" iface="eth0" netns=""
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.374 [INFO][6141] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.374 [INFO][6141] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.452 [INFO][6150] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.453 [INFO][6150] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.453 [INFO][6150] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.462 [WARNING][6150] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.462 [INFO][6150] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" HandleID="k8s-pod-network.3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793" Workload="localhost-k8s-goldmane--5b85766d88--xtkhw-eth0"
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.466 [INFO][6150] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:52:27.470350 containerd[1575]: 2026-04-17 23:52:27.468 [INFO][6141] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793"
Apr 17 23:52:27.470783 containerd[1575]: time="2026-04-17T23:52:27.470393589Z" level=info msg="TearDown network for sandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" successfully"
Apr 17 23:52:27.476063 containerd[1575]: time="2026-04-17T23:52:27.475928376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:52:27.476063 containerd[1575]: time="2026-04-17T23:52:27.476031142Z" level=info msg="RemovePodSandbox \"3c2253c684a5ac4b5d170063c86f4f62f73a8e045f33982a6ce3af9295c9d793\" returns successfully"
Apr 17 23:52:31.211807 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:57064.service - OpenSSH per-connection server daemon (10.0.0.1:57064).
Apr 17 23:52:31.242502 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 57064 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:52:31.244120 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:52:31.249841 systemd-logind[1553]: New session 19 of user core.
Apr 17 23:52:31.261020 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:52:31.391258 sshd[6162]: pam_unix(sshd:session): session closed for user core
Apr 17 23:52:31.395144 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:57064.service: Deactivated successfully.
Apr 17 23:52:31.397713 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:52:31.397727 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:52:31.399632 systemd-logind[1553]: Removed session 19.
Apr 17 23:52:34.952216 kubelet[2677]: I0417 23:52:34.952044 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-789694cb9f-nxp2s" podStartSLOduration=16.279412188 podStartE2EDuration="31.952026473s" podCreationTimestamp="2026-04-17 23:52:03 +0000 UTC" firstStartedPulling="2026-04-17 23:52:04.593636902 +0000 UTC m=+39.366817421" lastFinishedPulling="2026-04-17 23:52:20.266251188 +0000 UTC m=+55.039431706" observedRunningTime="2026-04-17 23:52:20.864533712 +0000 UTC m=+55.637714241" watchObservedRunningTime="2026-04-17 23:52:34.952026473 +0000 UTC m=+69.725207005"
Apr 17 23:52:36.409136 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:57220.service - OpenSSH per-connection server daemon (10.0.0.1:57220).
Apr 17 23:52:36.471269 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 57220 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:52:36.474240 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:52:36.481563 systemd-logind[1553]: New session 20 of user core.
Apr 17 23:52:36.486630 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:52:36.666785 sshd[6202]: pam_unix(sshd:session): session closed for user core
Apr 17 23:52:36.671323 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:57220.service: Deactivated successfully.
Apr 17 23:52:36.674101 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:52:36.674133 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:52:36.676579 systemd-logind[1553]: Removed session 20.