Apr 14 01:00:31.647259 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 14 01:00:31.647300 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 01:00:31.647317 kernel: BIOS-provided physical RAM map: Apr 14 01:00:31.647326 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 14 01:00:31.647334 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 14 01:00:31.647342 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 14 01:00:31.647351 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Apr 14 01:00:31.647416 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Apr 14 01:00:31.647424 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 14 01:00:31.647436 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 14 01:00:31.647445 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 14 01:00:31.647453 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 14 01:00:31.647479 kernel: NX (Execute Disable) protection: active Apr 14 01:00:31.647488 kernel: APIC: Static calls initialized Apr 14 01:00:31.647498 kernel: SMBIOS 2.8 present. Apr 14 01:00:31.647524 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Apr 14 01:00:31.647533 kernel: Hypervisor detected: KVM Apr 14 01:00:31.647542 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 14 01:00:31.647551 kernel: kvm-clock: using sched offset of 10806444966 cycles Apr 14 01:00:31.647560 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 14 01:00:31.647569 kernel: tsc: Detected 2793.438 MHz processor Apr 14 01:00:31.647579 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 14 01:00:31.647590 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 14 01:00:31.647600 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000 Apr 14 01:00:31.647611 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 14 01:00:31.647620 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 14 01:00:31.647629 kernel: Using GB pages for direct mapping Apr 14 01:00:31.647637 kernel: ACPI: Early table checksum verification disabled Apr 14 01:00:31.647646 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Apr 14 01:00:31.647655 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647663 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647671 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647680 kernel: ACPI: FACS 0x000000009CFE0000 000040 Apr 14 01:00:31.647692 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647701 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647710 kernel: 
ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647719 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 01:00:31.647727 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Apr 14 01:00:31.647736 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Apr 14 01:00:31.647745 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Apr 14 01:00:31.647758 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Apr 14 01:00:31.647771 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Apr 14 01:00:31.647780 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Apr 14 01:00:31.647789 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Apr 14 01:00:31.647798 kernel: No NUMA configuration found Apr 14 01:00:31.647808 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Apr 14 01:00:31.647818 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Apr 14 01:00:31.647830 kernel: Zone ranges: Apr 14 01:00:31.647839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 14 01:00:31.647876 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Apr 14 01:00:31.647886 kernel: Normal empty Apr 14 01:00:31.647895 kernel: Movable zone start for each node Apr 14 01:00:31.647905 kernel: Early memory node ranges Apr 14 01:00:31.647914 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 14 01:00:31.647924 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Apr 14 01:00:31.647934 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Apr 14 01:00:31.647948 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 01:00:31.647958 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 14 01:00:31.647987 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 14 01:00:31.647997 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 14 01:00:31.648007 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 14 01:00:31.648016 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 14 01:00:31.648025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 14 01:00:31.648034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 14 01:00:31.648044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 14 01:00:31.648057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 14 01:00:31.648067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 14 01:00:31.648076 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 14 01:00:31.648086 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 14 01:00:31.648095 kernel: TSC deadline timer available Apr 14 01:00:31.648104 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 14 01:00:31.648114 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 14 01:00:31.648123 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 14 01:00:31.648132 kernel: kvm-guest: setup PV sched yield Apr 14 01:00:31.648159 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 14 01:00:31.648172 kernel: Booting paravirtualized kernel on KVM Apr 14 01:00:31.648183 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 14 01:00:31.648193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 
nr_cpu_ids:4 nr_node_ids:1 Apr 14 01:00:31.648202 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 14 01:00:31.648211 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 14 01:00:31.648220 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 14 01:00:31.648230 kernel: kvm-guest: PV spinlocks enabled Apr 14 01:00:31.648239 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 14 01:00:31.648250 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 01:00:31.648263 kernel: random: crng init done Apr 14 01:00:31.648272 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 14 01:00:31.648281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 14 01:00:31.648291 kernel: Fallback order for Node 0: 0 Apr 14 01:00:31.648302 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Apr 14 01:00:31.648311 kernel: Policy zone: DMA32 Apr 14 01:00:31.648320 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 14 01:00:31.648330 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved) Apr 14 01:00:31.648342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 14 01:00:31.648351 kernel: ftrace: allocating 37996 entries in 149 pages Apr 14 01:00:31.648922 kernel: ftrace: allocated 149 pages with 4 groups Apr 14 01:00:31.648939 kernel: Dynamic Preempt: voluntary Apr 14 01:00:31.648948 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 14 01:00:31.648959 kernel: rcu: RCU event tracing is enabled. Apr 14 01:00:31.648968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 14 01:00:31.648977 kernel: Trampoline variant of Tasks RCU enabled. Apr 14 01:00:31.648988 kernel: Rude variant of Tasks RCU enabled. Apr 14 01:00:31.649018 kernel: Tracing variant of Tasks RCU enabled. Apr 14 01:00:31.649027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 14 01:00:31.649036 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 14 01:00:31.649045 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 14 01:00:31.649073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 14 01:00:31.649083 kernel: Console: colour VGA+ 80x25 Apr 14 01:00:31.649093 kernel: printk: console [ttyS0] enabled Apr 14 01:00:31.649101 kernel: ACPI: Core revision 20230628 Apr 14 01:00:31.649111 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 14 01:00:31.649123 kernel: APIC: Switch to symmetric I/O mode setup Apr 14 01:00:31.649133 kernel: x2apic enabled Apr 14 01:00:31.649143 kernel: APIC: Switched APIC routing to: physical x2apic Apr 14 01:00:31.649152 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 14 01:00:31.649162 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 14 01:00:31.649172 kernel: kvm-guest: setup PV IPIs Apr 14 01:00:31.649182 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 14 01:00:31.649193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 01:00:31.649213 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 14 01:00:31.649223 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 14 01:00:31.649233 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 14 01:00:31.649245 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 14 01:00:31.649254 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 14 01:00:31.649264 kernel: Spectre V2 : Mitigation: Retpolines Apr 14 01:00:31.649273 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 14 01:00:31.649284 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 14 01:00:31.649297 kernel: RETBleed: Vulnerable Apr 14 01:00:31.649307 kernel: Speculative Store Bypass: Vulnerable Apr 14 01:00:31.649317 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 14 01:00:31.649350 kernel: GDS: Unknown: Dependent on hypervisor status Apr 14 01:00:31.649396 kernel: active return thunk: its_return_thunk Apr 14 01:00:31.649407 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 14 01:00:31.649417 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 14 01:00:31.649428 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 14 01:00:31.649437 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 14 01:00:31.649452 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 14 01:00:31.649462 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 14 01:00:31.649472 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 14 01:00:31.649482 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 14 01:00:31.649492 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 14 01:00:31.649503 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 14 01:00:31.649512 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 14 01:00:31.649523 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 14 01:00:31.649534 kernel: Freeing SMP alternatives memory: 32K Apr 14 01:00:31.649548 kernel: pid_max: default: 32768 minimum: 301 Apr 14 01:00:31.649560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 14 01:00:31.649570 kernel: landlock: Up and running. 
Apr 14 01:00:31.649580 kernel: SELinux: Initializing. Apr 14 01:00:31.649590 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 01:00:31.649600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 01:00:31.649610 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 14 01:00:31.650069 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 01:00:31.651101 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 01:00:31.651173 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 01:00:31.651185 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 14 01:00:31.651196 kernel: signal: max sigframe size: 3632 Apr 14 01:00:31.651207 kernel: rcu: Hierarchical SRCU implementation. Apr 14 01:00:31.651220 kernel: rcu: Max phase no-delay instances is 400. Apr 14 01:00:31.651231 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 14 01:00:31.651242 kernel: smp: Bringing up secondary CPUs ... Apr 14 01:00:31.651253 kernel: smpboot: x86: Booting SMP configuration: Apr 14 01:00:31.651263 kernel: .... node #0, CPUs: #1 #2 #3 Apr 14 01:00:31.651277 kernel: smp: Brought up 1 node, 4 CPUs Apr 14 01:00:31.651288 kernel: smpboot: Max logical packages: 1 Apr 14 01:00:31.651299 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 14 01:00:31.651311 kernel: devtmpfs: initialized Apr 14 01:00:31.651322 kernel: x86/mm: Memory block size: 128MB Apr 14 01:00:31.651332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 14 01:00:31.651344 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 14 01:00:31.651355 kernel: pinctrl core: initialized pinctrl subsystem Apr 14 01:00:31.651404 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 14 01:00:31.651419 kernel: audit: initializing netlink subsys (disabled) Apr 14 01:00:31.651429 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 14 01:00:31.651440 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 14 01:00:31.651451 kernel: audit: type=2000 audit(1776128423.176:1): state=initialized audit_enabled=0 res=1 Apr 14 01:00:31.651462 kernel: cpuidle: using governor menu Apr 14 01:00:31.651474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 14 01:00:31.651486 kernel: dca service started, version 1.12.1 Apr 14 01:00:31.651497 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 14 01:00:31.651508 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 14 01:00:31.651522 kernel: PCI: Using configuration type 1 for base access Apr 14 01:00:31.651534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 14 01:00:31.651546 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 14 01:00:31.651557 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 14 01:00:31.651568 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 14 01:00:31.651580 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 14 01:00:31.651591 kernel: ACPI: Added _OSI(Module Device) Apr 14 01:00:31.651603 kernel: ACPI: Added _OSI(Processor Device) Apr 14 01:00:31.651614 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 14 01:00:31.651628 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 14 01:00:31.651639 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 14 01:00:31.651650 kernel: ACPI: Interpreter enabled Apr 14 01:00:31.651661 kernel: ACPI: PM: (supports S0 S3 S5) Apr 14 01:00:31.651672 kernel: ACPI: Using IOAPIC for interrupt routing Apr 14 01:00:31.651682 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 14 01:00:31.651692 kernel: PCI: Using E820 reservations for host bridge windows Apr 14 01:00:31.651704 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 14 01:00:31.651714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 14 01:00:31.652765 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 14 01:00:31.652936 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 14 01:00:31.653042 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 14 01:00:31.653057 kernel: PCI host bridge to bus 0000:00 Apr 14 01:00:31.653214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 14 01:00:31.653308 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 14 01:00:31.653464 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 14 01:00:31.653553 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 14 01:00:31.657706 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 14 01:00:31.657935 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 14 01:00:31.658111 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 14 01:00:31.668689 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 14 01:00:31.671825 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 14 01:00:31.672131 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Apr 14 01:00:31.672239 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Apr 14 01:00:31.672341 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Apr 14 01:00:31.673256 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 14 01:00:31.673627 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 14 01:00:31.673734 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 14 01:00:31.673877 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Apr 14 01:00:31.673993 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Apr 14 01:00:31.674131 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 14 01:00:31.674229 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Apr 14 01:00:31.674322 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Apr 14 01:00:31.675630 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Apr 14 01:00:31.676071 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 14 01:00:31.676198 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Apr 14 01:00:31.676784 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Apr 14 01:00:31.678167 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Apr 14 01:00:31.683724 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Apr 14 01:00:31.686650 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 14 01:00:31.687734 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 14 01:00:31.688990 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 14 01:00:31.689268 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Apr 14 01:00:31.689543 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Apr 14 01:00:31.689706 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 14 01:00:31.689819 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 14 01:00:31.689834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 14 01:00:31.689846 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 14 01:00:31.689880 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 14 01:00:31.689889 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 14 01:00:31.689905 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 14 01:00:31.689916 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 14 01:00:31.689926 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 14 01:00:31.689937 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 14 01:00:31.689949 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 14 01:00:31.689961 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 14 01:00:31.689972 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 14 01:00:31.689983 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 14 01:00:31.689995 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 14 01:00:31.690009 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 14 01:00:31.690021 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 14 01:00:31.690032 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 14 01:00:31.690043 kernel: iommu: Default domain type: Translated Apr 14 01:00:31.690053 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 14 01:00:31.690063 kernel: PCI: Using ACPI for IRQ routing Apr 14 01:00:31.690073 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 14 01:00:31.690083 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 14 01:00:31.690094 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Apr 14 01:00:31.690221 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 14 01:00:31.690329 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 14 01:00:31.690516 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 14 01:00:31.690534 kernel: vgaarb: loaded Apr 14 01:00:31.690545 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 14 01:00:31.690557 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 14 01:00:31.690567 kernel: clocksource: Switched to clocksource kvm-clock Apr 14 01:00:31.690578 kernel: VFS: Disk quotas dquot_6.6.0 Apr 14 01:00:31.690595 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Apr 14 01:00:31.690606 kernel: pnp: PnP ACPI init Apr 14 01:00:31.690806 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 14 01:00:31.690825 kernel: pnp: PnP ACPI: found 6 devices Apr 14 01:00:31.690835 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 14 01:00:31.690846 kernel: NET: Registered PF_INET protocol family Apr 14 01:00:31.690879 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 14 01:00:31.690890 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 14 01:00:31.690905 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 14 01:00:31.690915 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 14 01:00:31.690925 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 14 01:00:31.690936 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 14 01:00:31.690946 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 01:00:31.690956 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 01:00:31.690966 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 14 01:00:31.690976 kernel: NET: Registered PF_XDP protocol family Apr 14 01:00:31.691116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 14 01:00:31.691219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 14 01:00:31.691312 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 14 01:00:31.691459 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 14 01:00:31.691555 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 14 01:00:31.692338 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 14 01:00:31.692568 kernel: PCI: CLS 0 bytes, default 64 Apr 14 01:00:31.692582 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 14 01:00:31.692593 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 01:00:31.692612 kernel: Initialise system trusted keyrings Apr 14 01:00:31.693480 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 14 01:00:31.693657 kernel: Key type asymmetric registered Apr 14 01:00:31.693669 kernel: Asymmetric key parser 'x509' registered Apr 14 01:00:31.693679 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 14 01:00:31.693689 kernel: io scheduler mq-deadline registered Apr 14 01:00:31.693695 kernel: io scheduler kyber registered Apr 14 01:00:31.693700 kernel: io scheduler bfq registered Apr 14 01:00:31.693706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 14 01:00:31.693725 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 14 01:00:31.693732 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 14 01:00:31.693737 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 14 01:00:31.693743 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 14 01:00:31.693749 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 14 01:00:31.693756 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 14 01:00:31.693766 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 14 01:00:31.693777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 14 01:00:31.694058 
kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 14 01:00:31.694180 kernel: rtc_cmos 00:04: registered as rtc0 Apr 14 01:00:31.694281 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T01:00:27 UTC (1776128427) Apr 14 01:00:31.694295 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 14 01:00:31.694353 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 14 01:00:31.694401 kernel: intel_pstate: CPU model not supported Apr 14 01:00:31.694411 kernel: NET: Registered PF_INET6 protocol family Apr 14 01:00:31.694421 kernel: Segment Routing with IPv6 Apr 14 01:00:31.694430 kernel: In-situ OAM (IOAM) with IPv6 Apr 14 01:00:31.694445 kernel: NET: Registered PF_PACKET protocol family Apr 14 01:00:31.694456 kernel: Key type dns_resolver registered Apr 14 01:00:31.694465 kernel: IPI shorthand broadcast: enabled Apr 14 01:00:31.694475 kernel: sched_clock: Marking stable (4221119782, 696543667)->(5626053652, -708390203) Apr 14 01:00:31.694485 kernel: registered taskstats version 1 Apr 14 01:00:31.694495 kernel: Loading compiled-in X.509 certificates Apr 14 01:00:31.694505 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 14 01:00:31.694515 kernel: Key type .fscrypt registered Apr 14 01:00:31.694525 kernel: Key type fscrypt-provisioning registered Apr 14 01:00:31.694537 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 14 01:00:31.694547 kernel: ima: Allocated hash algorithm: sha1 Apr 14 01:00:31.694558 kernel: ima: No architecture policies found Apr 14 01:00:31.694567 kernel: clk: Disabling unused clocks Apr 14 01:00:31.694577 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 14 01:00:31.694588 kernel: Write protecting the kernel read-only data: 36864k Apr 14 01:00:31.694598 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 14 01:00:31.694608 kernel: Run /init as init process Apr 14 01:00:31.694618 kernel: with arguments: Apr 14 01:00:31.694628 kernel: /init Apr 14 01:00:31.694641 kernel: with environment: Apr 14 01:00:31.694651 kernel: HOME=/ Apr 14 01:00:31.694661 kernel: TERM=linux Apr 14 01:00:31.694671 kernel: hrtimer: interrupt took 6570764 ns Apr 14 01:00:31.694686 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 01:00:31.694700 systemd[1]: Detected virtualization kvm. Apr 14 01:00:31.694711 systemd[1]: Detected architecture x86-64. Apr 14 01:00:31.694723 systemd[1]: Running in initrd. Apr 14 01:00:31.694733 systemd[1]: No hostname configured, using default hostname. Apr 14 01:00:31.694743 systemd[1]: Hostname set to . Apr 14 01:00:31.694755 systemd[1]: Initializing machine ID from VM UUID. Apr 14 01:00:31.694766 systemd[1]: Queued start job for default target initrd.target. Apr 14 01:00:31.694776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 01:00:31.694788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 01:00:31.694799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 14 01:00:31.694813 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 01:00:31.694825 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 14 01:00:31.694876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 14 01:00:31.694894 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 14 01:00:31.694906 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 14 01:00:31.694920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 01:00:31.694931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 01:00:31.694942 systemd[1]: Reached target paths.target - Path Units. Apr 14 01:00:31.694953 systemd[1]: Reached target slices.target - Slice Units. Apr 14 01:00:31.694965 systemd[1]: Reached target swap.target - Swaps. Apr 14 01:00:31.694975 systemd[1]: Reached target timers.target - Timer Units. Apr 14 01:00:31.694986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 01:00:31.694997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 01:00:31.695012 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 01:00:31.695024 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 14 01:00:31.695035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 01:00:31.695046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 01:00:31.695058 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 01:00:31.695069 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 01:00:31.695080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 14 01:00:31.695091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 01:00:31.695100 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 14 01:00:31.695115 systemd[1]: Starting systemd-fsck-usr.service... Apr 14 01:00:31.695126 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 01:00:31.695138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 01:00:31.695149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 01:00:31.695160 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 14 01:00:31.695172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 01:00:31.695183 systemd[1]: Finished systemd-fsck-usr.service. Apr 14 01:00:31.695198 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 01:00:31.695333 systemd-journald[193]: Collecting audit messages is disabled. Apr 14 01:00:31.696187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 01:00:31.696263 systemd-journald[193]: Journal started Apr 14 01:00:31.696304 systemd-journald[193]: Runtime Journal (/run/log/journal/90be6d97ff2e4fe8a22e872de1e7d421) is 6.0M, max 48.4M, 42.3M free. 
Apr 14 01:00:31.671425 systemd-modules-load[196]: Inserted module 'overlay' Apr 14 01:00:31.974697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 14 01:00:31.974749 kernel: Bridge firewalling registered Apr 14 01:00:31.974763 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 01:00:31.761111 systemd-modules-load[196]: Inserted module 'br_netfilter' Apr 14 01:00:32.012746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 01:00:32.053249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:00:32.108083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 01:00:32.126170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 01:00:32.144032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 01:00:32.179450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 01:00:32.244294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 01:00:32.320272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 01:00:32.370907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 01:00:32.497835 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 14 01:00:32.566919 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:00:32.642131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 01:00:32.774760 dracut-cmdline[230]: dracut-dracut-053 Apr 14 01:00:32.876771 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 01:00:33.088281 systemd-resolved[232]: Positive Trust Anchors: Apr 14 01:00:33.088301 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 01:00:33.088337 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 01:00:33.101338 systemd-resolved[232]: Defaulting to hostname 'linux'. Apr 14 01:00:33.114456 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 01:00:33.129181 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 01:00:34.551465 kernel: SCSI subsystem initialized Apr 14 01:00:34.633648 kernel: Loading iSCSI transport class v2.0-870. 
Apr 14 01:00:34.896423 kernel: iscsi: registered transport (tcp) Apr 14 01:00:35.100270 kernel: iscsi: registered transport (qla4xxx) Apr 14 01:00:35.112163 kernel: QLogic iSCSI HBA Driver Apr 14 01:00:36.549469 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 14 01:00:36.840235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 14 01:00:37.520707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 14 01:00:37.527853 kernel: device-mapper: uevent: version 1.0.3 Apr 14 01:00:37.528054 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 14 01:00:38.800647 kernel: raid6: avx512x4 gen() 12350 MB/s Apr 14 01:00:38.824321 kernel: raid6: avx512x2 gen() 14794 MB/s Apr 14 01:00:38.844015 kernel: raid6: avx512x1 gen() 10617 MB/s Apr 14 01:00:38.865185 kernel: raid6: avx2x4 gen() 5009 MB/s Apr 14 01:00:38.884735 kernel: raid6: avx2x2 gen() 6505 MB/s Apr 14 01:00:38.906155 kernel: raid6: avx2x1 gen() 11169 MB/s Apr 14 01:00:38.908785 kernel: raid6: using algorithm avx512x2 gen() 14794 MB/s Apr 14 01:00:38.929516 kernel: raid6: .... xor() 4997 MB/s, rmw enabled Apr 14 01:00:38.931546 kernel: raid6: using avx512x2 recovery algorithm Apr 14 01:00:39.393010 kernel: xor: automatically using best checksumming function avx Apr 14 01:00:42.888186 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 14 01:00:43.649247 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 14 01:00:44.188218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 01:00:45.187761 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 14 01:00:45.371552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 01:00:45.454872 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 14 01:00:45.859448 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Apr 14 01:00:47.983867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 01:00:48.154740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 01:00:54.710342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 01:00:54.739593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 14 01:00:55.045934 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 14 01:00:55.075655 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 01:00:55.082342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 01:00:55.098746 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 01:00:55.134529 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 14 01:00:55.179858 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 14 01:00:55.271416 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 14 01:00:55.287286 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 14 01:00:55.344562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 14 01:00:55.345839 kernel: GPT:9289727 != 19775487 Apr 14 01:00:55.345866 kernel: GPT:Alternate GPT header not at the end of the disk. 
Apr 14 01:00:55.345981 kernel: GPT:9289727 != 19775487 Apr 14 01:00:55.345997 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 14 01:00:55.346010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 01:00:55.346024 kernel: cryptd: max_cpu_qlen set to 1000 Apr 14 01:00:55.501043 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 01:00:55.508732 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 01:00:55.556488 kernel: libata version 3.00 loaded. Apr 14 01:00:55.589247 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 01:00:55.649078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 01:00:55.660996 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:00:55.694050 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 01:00:55.721240 kernel: AVX2 version of gcm_enc/dec engaged. Apr 14 01:00:55.730599 kernel: AES CTR mode by8 optimization enabled Apr 14 01:00:55.738250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 01:00:55.985220 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Apr 14 01:00:56.058850 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (460) Apr 14 01:00:56.068484 kernel: ahci 0000:00:1f.2: version 3.0 Apr 14 01:00:56.081454 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 14 01:00:56.099499 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 14 01:00:56.100134 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 14 01:00:56.106769 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 14 01:00:56.849478 kernel: scsi host0: ahci Apr 14 01:00:56.849819 kernel: scsi host1: ahci Apr 14 01:00:56.849987 kernel: scsi host2: ahci Apr 14 01:00:56.850122 kernel: scsi host3: ahci Apr 14 01:00:56.850243 kernel: scsi host4: ahci Apr 14 01:00:56.851096 kernel: scsi host5: ahci Apr 14 01:00:56.861341 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 14 01:00:56.862240 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 14 01:00:56.862278 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 14 01:00:56.862294 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 14 01:00:56.862309 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 14 01:00:56.862324 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 14 01:00:56.862339 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 14 01:00:56.862354 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 14 01:00:56.862426 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 14 01:00:56.862441 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 14 01:00:56.862482 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 14 01:00:56.862499 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 14 01:00:56.862515 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 14 01:00:56.862531 kernel: ata3.00: applying bridge limits Apr 14 01:00:56.862547 kernel: ata3.00: configured for UDMA/100 Apr 14 01:00:56.862561 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 14 01:00:56.891176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 14 01:00:56.945590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:00:57.007684 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 14 01:00:57.012292 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 14 01:00:57.039812 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 14 01:00:57.074917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 01:00:57.168197 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 14 01:00:57.211537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 14 01:00:57.349213 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 14 01:00:57.397083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 01:00:57.645628 disk-uuid[566]: Primary Header is updated. Apr 14 01:00:57.645628 disk-uuid[566]: Secondary Entries is updated. Apr 14 01:00:57.645628 disk-uuid[566]: Secondary Header is updated. Apr 14 01:00:57.683758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 01:00:57.697413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 01:00:57.755540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 01:00:57.915161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 01:00:58.831897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 01:00:58.842913 disk-uuid[573]: The operation has completed successfully. Apr 14 01:01:03.340059 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 14 01:01:03.340320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 14 01:01:03.824912 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 14 01:01:06.329780 sh[592]: Success Apr 14 01:01:08.035765 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 14 01:01:10.960912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 14 01:01:11.041769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 14 01:01:11.402191 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 14 01:01:12.413960 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 14 01:01:12.439315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 14 01:01:12.458495 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 14 01:01:12.458732 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 14 01:01:12.458748 kernel: BTRFS info (device dm-0): using free space tree Apr 14 01:01:13.038924 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 14 01:01:13.109610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 14 01:01:13.181896 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 14 01:01:13.232598 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 14 01:01:14.044443 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 01:01:14.046480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 01:01:14.046924 kernel: BTRFS info (device vda6): using free space tree Apr 14 01:01:14.133743 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 01:01:14.341124 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 01:01:14.345834 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 14 01:01:14.844803 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 14 01:01:15.013441 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 14 01:01:17.012329 ignition[698]: Ignition 2.19.0 Apr 14 01:01:17.015354 ignition[698]: Stage: fetch-offline Apr 14 01:01:17.017955 ignition[698]: no configs at "/usr/lib/ignition/base.d" Apr 14 01:01:17.017985 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:01:17.018170 ignition[698]: parsed url from cmdline: "" Apr 14 01:01:17.018173 ignition[698]: no config URL provided Apr 14 01:01:17.018178 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Apr 14 01:01:17.018186 ignition[698]: no config at "/usr/lib/ignition/user.ign" Apr 14 01:01:17.019614 ignition[698]: op(1): [started] loading QEMU firmware config module Apr 14 01:01:17.019650 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 14 01:01:17.182663 ignition[698]: op(1): [finished] loading QEMU firmware config module Apr 14 01:01:17.812537 ignition[698]: parsing config with SHA512: f360f4052d09a239ed8856ff891965284ffa77c0859ac047d1d1dd99dfad5389e1be2a06bad50734a98aaf39e1ddde4279be9339b13957caefc4a3e4d29a8326 Apr 14 01:01:17.989157 unknown[698]: fetched base config from "system" Apr 14 01:01:17.989193 unknown[698]: fetched user config from "qemu" Apr 14 01:01:18.047609 ignition[698]: fetch-offline: fetch-offline passed Apr 14 01:01:18.050936 ignition[698]: Ignition finished successfully Apr 14 01:01:18.083519 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 01:01:23.084018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 01:01:23.482771 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 01:01:31.192628 systemd-networkd[785]: lo: Link UP Apr 14 01:01:31.192637 systemd-networkd[785]: lo: Gained carrier Apr 14 01:01:31.513981 systemd-networkd[785]: Enumeration completed Apr 14 01:01:31.582818 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 01:01:31.609236 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 01:01:31.609243 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 01:01:31.723199 systemd-networkd[785]: eth0: Link UP Apr 14 01:01:31.723203 systemd-networkd[785]: eth0: Gained carrier Apr 14 01:01:31.723215 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 01:01:31.762683 systemd[1]: Reached target network.target - Network. Apr 14 01:01:31.841616 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 14 01:01:31.957505 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 14 01:01:31.957670 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 01:01:33.114550 systemd-networkd[785]: eth0: Gained IPv6LL Apr 14 01:01:33.471851 ignition[787]: Ignition 2.19.0 Apr 14 01:01:33.472185 ignition[787]: Stage: kargs Apr 14 01:01:33.571510 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 14 01:01:33.473036 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 14 01:01:33.473053 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:01:33.491808 ignition[787]: kargs: kargs passed Apr 14 01:01:33.524323 ignition[787]: Ignition finished successfully Apr 14 01:01:33.761662 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 14 01:01:38.313159 ignition[796]: Ignition 2.19.0 Apr 14 01:01:38.313413 ignition[796]: Stage: disks Apr 14 01:01:38.345596 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 14 01:01:38.345641 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:01:38.544382 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 14 01:01:38.396910 ignition[796]: disks: disks passed Apr 14 01:01:38.416113 ignition[796]: Ignition finished successfully Apr 14 01:01:38.697842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 14 01:01:38.837473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 01:01:38.898071 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 01:01:38.972060 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 01:01:39.064160 systemd[1]: Reached target basic.target - Basic System. Apr 14 01:01:39.503744 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 14 01:01:40.142304 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 14 01:01:40.396822 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 14 01:01:40.582469 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 14 01:01:42.674851 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 14 01:01:43.009870 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 14 01:01:43.111779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 14 01:01:43.269703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 01:01:43.389511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 14 01:01:43.400512 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 14 01:01:43.400611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 14 01:01:43.400647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 01:01:43.552739 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 14 01:01:43.631854 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Apr 14 01:01:43.652977 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 01:01:43.656099 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 01:01:43.656130 kernel: BTRFS info (device vda6): using free space tree Apr 14 01:01:43.658062 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 14 01:01:43.801852 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 01:01:43.913188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 01:01:46.040136 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 14 01:01:46.346905 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 14 01:01:46.682126 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Apr 14 01:01:47.205873 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Apr 14 01:02:04.323329 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 14 01:02:04.491056 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 14 01:02:04.543161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 14 01:02:04.653418 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 01:02:04.682681 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 14 01:02:04.858538 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 14 01:02:05.926933 ignition[933]: INFO : Ignition 2.19.0 Apr 14 01:02:05.949729 ignition[933]: INFO : Stage: mount Apr 14 01:02:05.949729 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 01:02:05.949729 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:02:05.992034 ignition[933]: INFO : mount: mount passed Apr 14 01:02:06.006763 ignition[933]: INFO : Ignition finished successfully Apr 14 01:02:06.064322 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 14 01:02:06.129159 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 14 01:02:07.226855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 01:02:07.556798 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Apr 14 01:02:07.611557 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 01:02:07.613054 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 01:02:07.622867 kernel: BTRFS info (device vda6): using free space tree Apr 14 01:02:07.661737 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 01:02:07.866334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 01:02:09.147930 ignition[962]: INFO : Ignition 2.19.0 Apr 14 01:02:09.147930 ignition[962]: INFO : Stage: files Apr 14 01:02:09.164228 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 01:02:09.174266 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:02:09.185203 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Apr 14 01:02:09.226033 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 14 01:02:09.246146 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 14 01:02:09.345085 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 14 01:02:09.365204 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 14 01:02:09.474951 unknown[962]: wrote ssh authorized keys file for user: core Apr 14 01:02:09.513184 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 14 01:02:09.780983 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 01:02:09.796793 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 14 01:02:10.467696 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 14 01:02:10.980924 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 01:02:10.980924 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 14 01:02:11.000592 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 14 01:02:11.000592 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 14 01:02:11.016857 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 14 01:02:11.032539 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 01:02:11.045434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 01:02:11.050403 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 01:02:11.056148 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 01:02:11.056148 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 01:02:11.075863 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 01:02:11.087670 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 01:02:11.087670 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 01:02:11.087670 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 01:02:11.121060 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 14 01:02:11.901163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 14 01:02:16.809562 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 01:02:16.819465 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 14 01:02:16.850756 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 01:02:16.947842 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 01:02:16.947842 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 14 01:02:16.947842 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 14 01:02:16.947842 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 01:02:16.998099 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 01:02:16.998099 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 14 01:02:16.998099 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 14 01:02:19.053953 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 01:02:19.335074 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 01:02:19.343840 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 14 01:02:19.343840 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 14 01:02:19.363790 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 14 01:02:19.363790 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 14 01:02:19.363790 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 14 01:02:19.363790 ignition[962]: INFO : files: files passed Apr 14 01:02:19.363790 ignition[962]: INFO : Ignition finished successfully Apr 14 01:02:19.379545 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 14 01:02:19.583809 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 14 01:02:19.654570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 14 01:02:19.899007 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 14 01:02:19.899109 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 14 01:02:19.972872 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Apr 14 01:02:20.048016 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:02:20.056264 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:02:20.062335 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 01:02:20.180628 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 01:02:20.202429 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 14 01:02:20.514082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 01:02:21.611042 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 01:02:21.672927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 01:02:21.703283 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 01:02:21.714817 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 01:02:21.767121 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 01:02:21.872778 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 01:02:22.611787 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 01:02:22.861691 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 01:02:23.211211 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 01:02:23.226030 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 01:02:23.290348 systemd[1]: Stopped target timers.target - Timer Units. Apr 14 01:02:23.393113 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 01:02:23.444910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 01:02:23.589024 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 01:02:23.668527 systemd[1]: Stopped target basic.target - Basic System. Apr 14 01:02:23.707978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 01:02:23.747054 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 01:02:23.794020 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 01:02:23.852964 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 14 01:02:23.940769 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 01:02:23.954445 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 01:02:23.975259 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 01:02:23.995458 systemd[1]: Stopped target swap.target - Swaps. Apr 14 01:02:24.014589 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 01:02:24.023807 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 01:02:24.058517 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 14 01:02:24.080167 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 01:02:24.122835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 01:02:24.123954 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 01:02:24.174816 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 01:02:24.175261 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 01:02:24.248086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 14 01:02:24.255929 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 01:02:24.311979 systemd[1]: Stopped target paths.target - Path Units. Apr 14 01:02:24.312537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 01:02:24.325514 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 01:02:24.525860 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 01:02:24.615188 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 01:02:24.690161 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 01:02:24.766048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 01:02:24.833136 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 01:02:24.881026 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 01:02:24.892037 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 01:02:24.912022 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 01:02:24.928250 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 01:02:24.952259 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 01:02:25.017733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 01:02:25.054226 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 01:02:25.057734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 01:02:25.057965 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 01:02:25.073973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 14 01:02:25.076885 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 01:02:25.113001 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 01:02:25.113161 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 14 01:02:25.138088 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 01:02:25.147833 ignition[1017]: INFO : Ignition 2.19.0 Apr 14 01:02:25.147833 ignition[1017]: INFO : Stage: umount Apr 14 01:02:25.147833 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 01:02:25.147833 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 01:02:25.147833 ignition[1017]: INFO : umount: umount passed Apr 14 01:02:25.147833 ignition[1017]: INFO : Ignition finished successfully Apr 14 01:02:25.156784 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 01:02:25.162657 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 14 01:02:25.185750 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 01:02:25.192720 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 14 01:02:25.219846 systemd[1]: Stopped target network.target - Network. Apr 14 01:02:25.227891 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 01:02:25.238232 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 14 01:02:25.253673 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 01:02:25.254011 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 01:02:25.302259 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 01:02:25.302644 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 01:02:25.353892 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 01:02:25.354048 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 01:02:25.407151 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 01:02:25.407649 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 01:02:25.452462 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 01:02:25.519549 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 01:02:25.646929 systemd-networkd[785]: eth0: DHCPv6 lease lost Apr 14 01:02:25.651268 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 01:02:25.671068 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 01:02:25.687986 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 01:02:25.688265 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 01:02:25.733903 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 01:02:25.739774 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 01:02:25.799074 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 14 01:02:25.824245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 01:02:25.828151 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 01:02:25.853681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 01:02:25.864914 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 01:02:25.877280 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 01:02:25.877481 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 01:02:25.889491 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 01:02:25.889632 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:02:25.912872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 01:02:26.006949 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 01:02:26.008214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 01:02:26.168984 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 01:02:26.200298 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 01:02:26.272573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 01:02:26.275414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 01:02:26.307441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 01:02:26.307590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 14 01:02:26.317633 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 01:02:26.330263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 14 01:02:26.376470 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 01:02:26.376703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 01:02:26.415908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 01:02:26.459442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 01:02:26.693125 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 01:02:26.718444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 01:02:26.719013 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 01:02:26.744431 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 14 01:02:26.753055 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 01:02:26.768180 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 01:02:26.769723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 01:02:26.804232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 01:02:26.806898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:02:27.210635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 14 01:02:27.255945 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 01:02:27.287186 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 01:02:27.414912 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 01:02:28.519033 systemd[1]: Switching root. Apr 14 01:02:28.652686 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 14 01:02:28.653060 systemd-journald[193]: Journal stopped Apr 14 01:02:57.626548 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 01:02:57.626652 kernel: SELinux: policy capability open_perms=1 Apr 14 01:02:57.626669 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 01:02:57.626688 kernel: SELinux: policy capability always_check_network=0 Apr 14 01:02:57.626703 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 01:02:57.626717 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 01:02:57.626732 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 01:02:57.626753 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 01:02:57.626775 kernel: audit: type=1403 audit(1776128550.909:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 01:02:57.626792 systemd[1]: Successfully loaded SELinux policy in 744.787ms. Apr 14 01:02:57.626815 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 1.329893s. Apr 14 01:02:57.626832 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 01:02:57.626847 systemd[1]: Detected virtualization kvm. Apr 14 01:02:57.626862 systemd[1]: Detected architecture x86-64. 
Apr 14 01:02:57.626876 systemd[1]: Detected first boot. Apr 14 01:02:57.626892 systemd[1]: Initializing machine ID from VM UUID. Apr 14 01:02:57.626907 zram_generator::config[1061]: No configuration found. Apr 14 01:02:57.626941 systemd[1]: Populated /etc with preset unit settings. Apr 14 01:02:57.626957 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 14 01:02:57.626972 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 14 01:02:57.626987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 14 01:02:57.627003 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 01:02:57.627018 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 01:02:57.627033 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 01:02:57.627048 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 01:02:57.627066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 01:02:57.627080 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 01:02:57.627095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 01:02:57.627110 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 01:02:57.627126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 01:02:57.627142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 01:02:57.627156 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 01:02:57.627170 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 01:02:57.627203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 14 01:02:57.627224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 01:02:57.627239 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 14 01:02:57.627254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 01:02:57.627283 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 14 01:02:57.627298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 14 01:02:57.627312 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 14 01:02:57.627324 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 01:02:57.627339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 01:02:57.627351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 01:02:57.627853 systemd[1]: Reached target slices.target - Slice Units. Apr 14 01:02:57.627875 systemd[1]: Reached target swap.target - Swaps. Apr 14 01:02:57.627889 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 01:02:57.627907 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 01:02:57.627922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 01:02:57.627955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Apr 14 01:02:57.627967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 01:02:57.627992 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 01:02:57.628005 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 01:02:57.628018 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 01:02:57.628031 systemd[1]: Mounting media.mount - External Media Directory... Apr 14 01:02:57.628044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:02:57.628056 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 01:02:57.628069 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 01:02:57.628081 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 14 01:02:57.628094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 01:02:57.628119 systemd[1]: Reached target machines.target - Containers. Apr 14 01:02:57.628132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 01:02:57.628145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:02:57.628159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 01:02:57.628172 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 01:02:57.628185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:02:57.628196 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 01:02:57.628209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:02:57.628235 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 01:02:57.628265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:02:57.628280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 01:02:57.628294 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 14 01:02:57.628307 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 14 01:02:57.628321 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 14 01:02:57.628335 systemd[1]: Stopped systemd-fsck-usr.service. Apr 14 01:02:57.628346 kernel: fuse: init (API version 7.39) Apr 14 01:02:57.628398 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 01:02:57.628430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 01:02:57.628443 kernel: loop: module loaded Apr 14 01:02:57.628612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 01:02:57.628663 systemd-journald[1136]: Collecting audit messages is disabled. Apr 14 01:02:57.628698 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 01:02:57.628711 systemd-journald[1136]: Journal started Apr 14 01:02:57.631853 systemd-journald[1136]: Runtime Journal (/run/log/journal/90be6d97ff2e4fe8a22e872de1e7d421) is 6.0M, max 48.4M, 42.3M free. 
Apr 14 01:02:54.515004 systemd[1]: Queued start job for default target multi-user.target. Apr 14 01:02:54.868010 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 01:02:54.869649 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 14 01:02:54.871728 systemd[1]: systemd-journald.service: Consumed 2.645s CPU time. Apr 14 01:02:57.641406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 01:02:57.649622 systemd[1]: verity-setup.service: Deactivated successfully. Apr 14 01:02:57.659302 systemd[1]: Stopped verity-setup.service. Apr 14 01:02:57.671873 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:02:57.681644 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 01:02:57.700860 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 01:02:57.714093 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 01:02:57.731967 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 01:02:57.734595 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 14 01:02:57.739752 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 01:02:57.742556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 01:02:57.745054 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 01:02:57.775835 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 01:02:57.779257 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 01:02:57.779598 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 01:02:57.782744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:02:57.783123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:02:57.786445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:02:57.786746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:02:57.792722 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 01:02:57.793224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 01:02:57.805895 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:02:57.806255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:02:57.809524 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 01:02:57.812813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 01:02:57.824240 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 14 01:02:57.859678 kernel: ACPI: bus type drm_connector registered Apr 14 01:02:57.873421 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 01:02:57.883412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 01:02:57.978592 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 14 01:02:58.011519 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 01:02:58.029600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Apr 14 01:02:58.040627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 01:02:58.040691 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 01:02:58.051624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 01:02:58.098661 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 01:02:58.135232 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 01:02:58.143190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:02:58.192927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 01:02:58.239927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 01:02:58.243757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 01:02:58.268957 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 14 01:02:58.298320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 01:02:58.344278 systemd-journald[1136]: Time spent on flushing to /var/log/journal/90be6d97ff2e4fe8a22e872de1e7d421 is 56.073ms for 952 entries. Apr 14 01:02:58.344278 systemd-journald[1136]: System Journal (/var/log/journal/90be6d97ff2e4fe8a22e872de1e7d421) is 8.0M, max 195.6M, 187.6M free. Apr 14 01:02:58.644341 systemd-journald[1136]: Received client request to flush runtime journal. Apr 14 01:02:58.644434 kernel: loop0: detected capacity change from 0 to 142488 Apr 14 01:02:58.393230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 01:02:58.497772 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 14 01:02:58.519309 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 01:02:58.535587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 01:02:58.550190 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 14 01:02:58.580573 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 14 01:02:58.586275 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 14 01:02:58.599088 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 14 01:02:58.629304 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 14 01:02:58.680430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 14 01:02:58.698893 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 14 01:02:58.711794 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 14 01:02:58.715805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 01:02:58.735653 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Apr 14 01:02:58.736101 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. 
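The journal-flush entry above gives enough to estimate the per-entry cost of moving the runtime journal to persistent storage; plain arithmetic on the logged values:

flush_ms, entries = 56.073, 952           # values copied from the journal-flush entry above
print(f"{flush_ms / entries * 1000:.1f} µs per entry")   # ≈ 58.9 µs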
Apr 14 01:02:58.763722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 01:02:58.821776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 14 01:02:58.815400 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 14 01:02:58.860615 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 14 01:02:58.904491 kernel: loop1: detected capacity change from 0 to 228704 Apr 14 01:02:59.085807 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 14 01:02:59.147325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 01:02:59.349839 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Apr 14 01:02:59.357336 kernel: loop2: detected capacity change from 0 to 140768 Apr 14 01:02:59.352432 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Apr 14 01:02:59.456664 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 14 01:02:59.457582 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 14 01:02:59.476587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 01:02:59.638486 kernel: loop3: detected capacity change from 0 to 142488 Apr 14 01:02:59.783646 kernel: loop4: detected capacity change from 0 to 228704 Apr 14 01:02:59.988251 kernel: loop5: detected capacity change from 0 to 140768 Apr 14 01:03:00.083110 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 14 01:03:00.085423 (sd-merge)[1203]: Merged extensions into '/usr'. Apr 14 01:03:00.148643 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Apr 14 01:03:00.148683 systemd[1]: Reloading... Apr 14 01:03:00.761896 zram_generator::config[1232]: No configuration found. Apr 14 01:03:01.385969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:03:01.445961 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 14 01:03:01.672752 systemd[1]: Reloading finished in 1509 ms. Apr 14 01:03:01.906953 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 01:03:01.910131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 14 01:03:01.986152 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 01:03:02.195919 systemd[1]: Starting ensure-sysext.service... Apr 14 01:03:02.234296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 01:03:02.309088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 01:03:02.335940 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Apr 14 01:03:02.335956 systemd[1]: Reloading... Apr 14 01:03:02.473833 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 01:03:02.474236 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
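The (sd-merge) entries below show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' images onto /usr; the kubernetes image is reachable through the /etc/extensions/kubernetes.raw link written during the files stage. A small sketch that lists candidate images the way one might inspect them by hand; the search directories are the usual sysext locations and should be read as an assumption here:

from pathlib import Path

# Usual sysext image locations; treat the exact list as an assumption in this context.
for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    d = Path(directory)
    if d.is_dir():
        for image in sorted(d.glob("*.raw")):
            print(f"{directory}: {image.name}")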
Apr 14 01:03:02.478213 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 01:03:02.481244 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 14 01:03:02.481528 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Apr 14 01:03:02.520741 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 01:03:02.523748 systemd-tmpfiles[1268]: Skipping /boot Apr 14 01:03:02.524849 systemd-udevd[1269]: Using default interface naming scheme 'v255'. Apr 14 01:03:02.637385 zram_generator::config[1291]: No configuration found. Apr 14 01:03:02.645123 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 01:03:02.645140 systemd-tmpfiles[1268]: Skipping /boot Apr 14 01:03:03.382900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1324) Apr 14 01:03:03.529973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:03:03.698047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 01:03:03.711763 systemd[1]: Reloading finished in 1375 ms. Apr 14 01:03:03.733579 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 14 01:03:03.743740 kernel: ACPI: button: Power Button [PWRF] Apr 14 01:03:03.792606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 01:03:03.872780 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:03:04.127723 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 01:03:04.160142 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 01:03:04.224791 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 01:03:04.133804 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 14 01:03:04.150988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:04.218030 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 01:03:04.244786 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 14 01:03:04.280078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:03:04.364787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:03:04.385298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:03:04.428010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:03:04.430743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:03:04.432642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 14 01:03:04.451310 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 14 01:03:04.479211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 01:03:04.495142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 14 01:03:04.549938 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 14 01:03:04.564512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:04.569082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:03:04.569279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:03:04.589609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:03:04.590128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:03:04.625457 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 14 01:03:04.656692 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:03:04.657032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:03:04.768303 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:04.770205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:03:04.817024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:03:04.874007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:03:04.973903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:03:04.989961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:03:05.032286 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 14 01:03:05.064075 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:05.068025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 14 01:03:05.078143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 14 01:03:05.094181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:03:05.094558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:03:05.156731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:03:05.156972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:03:05.193918 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:03:05.218150 augenrules[1395]: No rules Apr 14 01:03:05.194159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:03:05.218644 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 01:03:05.420774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 14 01:03:05.443722 systemd[1]: Finished ensure-sysext.service. Apr 14 01:03:05.601952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:05.603100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:03:05.638120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 14 01:03:05.675878 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 01:03:05.713987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:03:05.763061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:03:05.774195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:03:05.792772 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 01:03:05.860756 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 14 01:03:05.960704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 01:03:05.967218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:03:05.979235 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 14 01:03:05.998965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 14 01:03:06.014073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:03:06.014671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:03:06.045127 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 01:03:06.045329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 01:03:06.095703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:03:06.096187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:03:06.115540 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:03:06.115754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:03:06.128779 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 01:03:06.148233 systemd-networkd[1376]: lo: Link UP Apr 14 01:03:06.148245 systemd-networkd[1376]: lo: Gained carrier Apr 14 01:03:06.150180 systemd-networkd[1376]: Enumeration completed Apr 14 01:03:06.167858 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 01:03:06.207147 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 01:03:06.207183 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 01:03:06.227989 systemd-networkd[1376]: eth0: Link UP Apr 14 01:03:06.233467 systemd-networkd[1376]: eth0: Gained carrier Apr 14 01:03:06.233655 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 01:03:06.298786 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 14 01:03:06.307525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 01:03:06.307905 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 01:03:06.308006 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 14 01:03:06.430701 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 01:03:06.553827 systemd-resolved[1377]: Positive Trust Anchors: Apr 14 01:03:06.555410 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 01:03:06.555502 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 01:03:06.581040 systemd-resolved[1377]: Defaulting to hostname 'linux'. Apr 14 01:03:06.585838 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 01:03:06.586066 systemd[1]: Reached target network.target - Network. Apr 14 01:03:06.586108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 01:03:06.647586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 14 01:03:06.648070 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 01:03:07.348256 systemd-resolved[1377]: Clock change detected. Flushing caches. Apr 14 01:03:07.348309 systemd-timesyncd[1418]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 01:03:07.348376 systemd-timesyncd[1418]: Initial clock synchronization to Tue 2026-04-14 01:03:07.347380 UTC. Apr 14 01:03:08.113216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 01:03:08.500294 systemd-networkd[1376]: eth0: Gained IPv6LL Apr 14 01:03:08.551993 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 01:03:08.646105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 01:03:08.696268 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 01:03:09.697682 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 14 01:03:09.763500 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 14 01:03:09.830548 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 01:03:09.949728 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 14 01:03:09.989509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 01:03:10.017218 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 01:03:10.020518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 01:03:10.052609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 14 01:03:10.094264 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 01:03:10.109795 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 01:03:10.114733 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
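The DHCPv4 lease above (10.0.0.89/16 via gateway 10.0.0.1) can be sanity-checked with a few lines of standard-library Python, for example to confirm the gateway is on-link for that prefix; the values are copied from the log:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.89/16")    # address/prefix from the lease above
gateway = ipaddress.ip_address("10.0.0.1")
print(iface.network)                              # 10.0.0.0/16
print(gateway in iface.network)                   # True -> gateway is on-link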
Apr 14 01:03:10.144534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 01:03:10.144791 systemd[1]: Reached target paths.target - Path Units. Apr 14 01:03:10.149312 systemd[1]: Reached target timers.target - Timer Units. Apr 14 01:03:10.173110 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 01:03:10.248322 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 01:03:10.317576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 01:03:10.360709 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 14 01:03:10.369984 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 01:03:10.379975 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 01:03:10.384118 systemd[1]: Reached target basic.target - Basic System. Apr 14 01:03:10.400587 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 01:03:10.399861 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 01:03:10.400046 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 01:03:10.429451 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 01:03:10.457884 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 01:03:10.481480 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 14 01:03:10.499183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 14 01:03:10.555706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 14 01:03:10.587897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 01:03:10.613298 dbus-daemon[1446]: [system] SELinux support is enabled Apr 14 01:03:10.682586 jq[1447]: false Apr 14 01:03:10.695038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:03:10.710045 extend-filesystems[1448]: Found loop3 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found loop4 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found loop5 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found sr0 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda1 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda2 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda3 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found usr Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda4 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda6 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda7 Apr 14 01:03:10.731290 extend-filesystems[1448]: Found vda9 Apr 14 01:03:10.731290 extend-filesystems[1448]: Checking size of /dev/vda9 Apr 14 01:03:10.982828 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1330) Apr 14 01:03:10.791007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 01:03:10.983264 extend-filesystems[1448]: Resized partition /dev/vda9 Apr 14 01:03:10.974848 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
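The "Found vda..." entries above show extend-filesystems walking the block devices before deciding what to grow. A rough stand-in for that walk using sysfs; the /sys/block layout is standard Linux, and treating vda as the disk of interest is taken from the log:

from pathlib import Path

disk = Path("/sys/block/vda")                     # the device enumerated in the log above
if disk.is_dir():
    for part in sorted(p for p in disk.glob("vda*") if p.is_dir()):
        sectors = int((part / "size").read_text())    # sysfs reports size in 512-byte sectors
        print(part.name, f"{sectors * 512 / 2**20:.1f} MiB")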
Apr 14 01:03:10.987340 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Apr 14 01:03:11.014328 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 01:03:11.039748 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 14 01:03:11.105715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 14 01:03:11.134199 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 14 01:03:11.150838 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 01:03:11.167058 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 01:03:11.167894 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 01:03:11.182140 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 01:03:11.237293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 01:03:11.252462 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 01:03:11.306031 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 14 01:03:11.348991 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 01:03:11.367354 jq[1475]: true Apr 14 01:03:11.381142 update_engine[1473]: I20260414 01:03:11.368126 1473 main.cc:92] Flatcar Update Engine starting Apr 14 01:03:11.390664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 14 01:03:11.943109 update_engine[1473]: I20260414 01:03:11.382714 1473 update_check_scheduler.cc:74] Next update check in 4m17s Apr 14 01:03:11.390908 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 01:03:11.434711 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 01:03:11.949569 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 01:03:11.949569 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 01:03:11.949569 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 01:03:11.999330 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 01:03:11.436972 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 01:03:12.011780 extend-filesystems[1448]: Resized filesystem in /dev/vda9 Apr 14 01:03:11.481177 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 01:03:12.025742 tar[1481]: linux-amd64/LICENSE Apr 14 01:03:12.025742 tar[1481]: linux-amd64/helm Apr 14 01:03:11.549683 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 14 01:03:12.030337 jq[1482]: true Apr 14 01:03:11.549971 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 14 01:03:11.626774 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 01:03:11.668701 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 01:03:11.668914 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 01:03:11.813190 systemd[1]: Started update-engine.service - Update Engine. 
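For scale, the resize logged above (553472 to 1864699 blocks of 4 KiB) corresponds to the root filesystem growing from roughly 2.1 GiB to about 7.1 GiB:

block = 4096                                  # "(4k) blocks" per the resize2fs output above
before, after = 553_472, 1_864_699            # block counts from the kernel message above

def to_gib(blocks: int) -> float:
    return blocks * block / 2**30

print(f"{to_gib(before):.2f} GiB -> {to_gib(after):.2f} GiB")   # 2.11 GiB -> 7.11 GiB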
Apr 14 01:03:11.830530 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 01:03:11.830768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 01:03:11.830800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 01:03:11.834906 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 01:03:11.839342 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 01:03:11.878379 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 01:03:11.956314 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 01:03:11.956572 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 14 01:03:11.958215 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 01:03:11.958236 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 01:03:11.981807 systemd-logind[1472]: New seat seat0. Apr 14 01:03:12.017984 systemd[1]: Started systemd-logind.service - User Login Management. Apr 14 01:03:12.025878 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 01:03:12.303408 bash[1510]: Updated "/home/core/.ssh/authorized_keys" Apr 14 01:03:12.304489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 01:03:12.310910 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 01:03:12.460242 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 01:03:12.474633 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 01:03:12.572583 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 01:03:12.577514 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 01:03:12.621305 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 01:03:12.691491 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 01:03:12.785045 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 01:03:12.800719 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 01:03:12.812210 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 01:03:13.184315 containerd[1483]: time="2026-04-14T01:03:13.183620290Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 01:03:13.308501 containerd[1483]: time="2026-04-14T01:03:13.307426575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.318341 containerd[1483]: time="2026-04-14T01:03:13.318268568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.318490333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.318527119Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.318892997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319021056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319173223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319196113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319441333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319464361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319483924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319496581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.319583093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.321240 containerd[1483]: time="2026-04-14T01:03:13.320003015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:03:13.354306 containerd[1483]: time="2026-04-14T01:03:13.320202557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:03:13.354306 containerd[1483]: time="2026-04-14T01:03:13.320229653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 01:03:13.354306 containerd[1483]: time="2026-04-14T01:03:13.320344889Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 14 01:03:13.354306 containerd[1483]: time="2026-04-14T01:03:13.320419042Z" level=info msg="metadata content store policy set" policy=shared Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.487498511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.487603997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.487626049Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.487645270Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.487664941Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 01:03:13.488480 containerd[1483]: time="2026-04-14T01:03:13.488397200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488696900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488815716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488831872Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488845157Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488859669Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488874144Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488887862Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488903949Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.488920082Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.489101711Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.489120770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.489138556Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.489178552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490394 containerd[1483]: time="2026-04-14T01:03:13.489199209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489213884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489229629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489243691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489270700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489287059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489306302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489321605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489339828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489353491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489367819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489383942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489400354Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489425744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489439901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.490992 containerd[1483]: time="2026-04-14T01:03:13.489453813Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489550068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489683424Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489701865Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489717017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489729100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489745497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489767198Z" level=info msg="NRI interface is disabled by configuration." Apr 14 01:03:13.491632 containerd[1483]: time="2026-04-14T01:03:13.489780068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 14 01:03:13.503902 containerd[1483]: time="2026-04-14T01:03:13.493467643Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 01:03:13.503902 containerd[1483]: time="2026-04-14T01:03:13.498595831Z" level=info msg="Connect containerd service" Apr 14 01:03:13.503902 containerd[1483]: time="2026-04-14T01:03:13.498804295Z" level=info msg="using legacy CRI server" Apr 14 01:03:13.503902 containerd[1483]: time="2026-04-14T01:03:13.498901355Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 01:03:13.503902 containerd[1483]: time="2026-04-14T01:03:13.499765705Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504526518Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504726502Z" level=info msg="Start subscribing containerd event" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504791547Z" level=info msg="Start recovering state" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504878175Z" level=info msg="Start event monitor" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504898041Z" level=info msg="Start snapshots syncer" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504911945Z" level=info msg="Start cni network conf syncer for default" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.504921516Z" level=info msg="Start streaming server" Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.505622208Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.505683279Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 01:03:13.510430 containerd[1483]: time="2026-04-14T01:03:13.505745235Z" level=info msg="containerd successfully booted in 0.336903s" Apr 14 01:03:13.514468 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 01:03:15.086861 tar[1481]: linux-amd64/README.md Apr 14 01:03:15.185055 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 01:03:16.790447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:03:16.824789 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 01:03:16.835992 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:03:16.838330 systemd[1]: Startup finished in 5.052s (kernel) + 2min 1.773s (initrd) + 45.895s (userspace) = 2min 52.721s. Apr 14 01:03:18.100972 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 01:03:18.183680 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:50754.service - OpenSSH per-connection server daemon (10.0.0.1:50754). 
Apr 14 01:03:19.864227 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 50754 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:19.890290 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:20.128574 systemd-logind[1472]: New session 1 of user core. Apr 14 01:03:20.135226 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 01:03:20.168888 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 01:03:20.259709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 01:03:20.370155 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 01:03:20.408781 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 01:03:21.960803 systemd[1572]: Queued start job for default target default.target. Apr 14 01:03:21.987518 systemd[1572]: Created slice app.slice - User Application Slice. Apr 14 01:03:21.988637 systemd[1572]: Reached target paths.target - Paths. Apr 14 01:03:21.991903 systemd[1572]: Reached target timers.target - Timers. Apr 14 01:03:22.035593 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 01:03:22.229631 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 01:03:22.230730 systemd[1572]: Reached target sockets.target - Sockets. Apr 14 01:03:22.230756 systemd[1572]: Reached target basic.target - Basic System. Apr 14 01:03:22.230826 systemd[1572]: Reached target default.target - Main User Target. Apr 14 01:03:22.230862 systemd[1572]: Startup finished in 1.751s. Apr 14 01:03:22.266011 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 01:03:22.318761 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 01:03:22.583588 kubelet[1557]: E0414 01:03:22.580106 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:03:22.626565 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:50766.service - OpenSSH per-connection server daemon (10.0.0.1:50766). Apr 14 01:03:22.641345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:03:22.641529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:03:22.642432 systemd[1]: kubelet.service: Consumed 2.256s CPU time. Apr 14 01:03:22.810792 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 50766 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:22.813755 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:22.905030 systemd-logind[1472]: New session 2 of user core. Apr 14 01:03:22.944632 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 01:03:23.187410 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 14 01:03:23.257465 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:50766.service: Deactivated successfully. Apr 14 01:03:23.277917 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 01:03:23.322701 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. 
Apr 14 01:03:23.339024 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:50780.service - OpenSSH per-connection server daemon (10.0.0.1:50780). Apr 14 01:03:23.394547 systemd-logind[1472]: Removed session 2. Apr 14 01:03:23.935058 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 50780 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:23.943516 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:24.041761 systemd-logind[1472]: New session 3 of user core. Apr 14 01:03:24.090628 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 01:03:24.351788 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 14 01:03:24.450733 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:50780.service: Deactivated successfully. Apr 14 01:03:24.472837 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 01:03:24.516913 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Apr 14 01:03:24.558514 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:50784.service - OpenSSH per-connection server daemon (10.0.0.1:50784). Apr 14 01:03:24.591802 systemd-logind[1472]: Removed session 3. Apr 14 01:03:24.794663 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 50784 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:24.810541 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:24.826525 systemd-logind[1472]: New session 4 of user core. Apr 14 01:03:24.853248 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 01:03:25.543442 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 14 01:03:25.609683 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:50784.service: Deactivated successfully. Apr 14 01:03:25.760221 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 01:03:25.790453 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Apr 14 01:03:25.825914 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:57038.service - OpenSSH per-connection server daemon (10.0.0.1:57038). Apr 14 01:03:25.841981 systemd-logind[1472]: Removed session 4. Apr 14 01:03:26.628833 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 57038 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:26.633594 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:26.665841 systemd-logind[1472]: New session 5 of user core. Apr 14 01:03:26.697449 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 01:03:27.156826 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 01:03:27.158550 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 01:03:27.237444 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 14 01:03:27.316268 sshd[1607]: pam_unix(sshd:session): session closed for user core Apr 14 01:03:27.427365 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:57038.service: Deactivated successfully. Apr 14 01:03:27.514343 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 01:03:27.579712 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Apr 14 01:03:27.604063 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:57048.service - OpenSSH per-connection server daemon (10.0.0.1:57048). Apr 14 01:03:27.625904 systemd-logind[1472]: Removed session 5. 
Apr 14 01:03:28.880398 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 57048 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:28.903315 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:29.052865 systemd-logind[1472]: New session 6 of user core. Apr 14 01:03:29.088691 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 01:03:29.497189 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 01:03:29.632413 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 01:03:29.767557 sudo[1619]: pam_unix(sudo:session): session closed for user root Apr 14 01:03:29.817099 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 01:03:29.818673 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 01:03:30.024675 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 01:03:30.071891 auditctl[1622]: No rules Apr 14 01:03:30.078899 systemd[1]: audit-rules.service: Deactivated successfully. Apr 14 01:03:30.105069 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 01:03:30.145660 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 01:03:31.113773 augenrules[1640]: No rules Apr 14 01:03:31.178242 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 01:03:31.186467 sudo[1618]: pam_unix(sudo:session): session closed for user root Apr 14 01:03:31.189978 sshd[1615]: pam_unix(sshd:session): session closed for user core Apr 14 01:03:31.350632 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:57048.service: Deactivated successfully. Apr 14 01:03:31.394104 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 01:03:31.469902 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Apr 14 01:03:31.582897 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:57052.service - OpenSSH per-connection server daemon (10.0.0.1:57052). Apr 14 01:03:31.598964 systemd-logind[1472]: Removed session 6. Apr 14 01:03:32.229998 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 57052 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:03:32.233827 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:03:32.400208 systemd-logind[1472]: New session 7 of user core. Apr 14 01:03:32.515757 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 01:03:32.673657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 01:03:32.800538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:03:33.060669 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 01:03:33.061118 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 01:03:34.799055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 01:03:34.804305 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:03:35.498866 kubelet[1670]: E0414 01:03:35.492915 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:03:35.575034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:03:35.575240 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:03:36.593545 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 01:03:36.601374 (dockerd)[1685]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 01:03:40.508353 dockerd[1685]: time="2026-04-14T01:03:40.500990936Z" level=info msg="Starting up" Apr 14 01:03:42.612128 dockerd[1685]: time="2026-04-14T01:03:42.604436852Z" level=info msg="Loading containers: start." Apr 14 01:03:45.595828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 01:03:45.712356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:03:46.116639 kernel: Initializing XFRM netlink socket Apr 14 01:03:47.718801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:03:47.825252 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:03:48.489190 kubelet[1775]: E0414 01:03:48.465610 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:03:48.499572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:03:48.500716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:03:49.083595 systemd-networkd[1376]: docker0: Link UP Apr 14 01:03:49.995281 dockerd[1685]: time="2026-04-14T01:03:49.994260111Z" level=info msg="Loading containers: done." Apr 14 01:03:50.364583 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2280406152-merged.mount: Deactivated successfully. Apr 14 01:03:50.450210 dockerd[1685]: time="2026-04-14T01:03:50.449645788Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 01:03:50.450210 dockerd[1685]: time="2026-04-14T01:03:50.450125194Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 01:03:50.453878 dockerd[1685]: time="2026-04-14T01:03:50.452149061Z" level=info msg="Daemon has completed initialization" Apr 14 01:03:52.393432 dockerd[1685]: time="2026-04-14T01:03:52.387015320Z" level=info msg="API listen on /run/docker.sock" Apr 14 01:03:52.393348 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 14 01:03:57.144523 update_engine[1473]: I20260414 01:03:57.143397 1473 update_attempter.cc:509] Updating boot flags... Apr 14 01:03:57.700031 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1856) Apr 14 01:03:58.147110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1858) Apr 14 01:03:58.642999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 14 01:03:58.773839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:04:00.730226 containerd[1483]: time="2026-04-14T01:04:00.725755214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 01:04:01.035277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:04:01.103489 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:04:01.767383 kubelet[1872]: E0414 01:04:01.762535 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:04:01.877151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:04:01.877321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:04:07.843739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749246048.mount: Deactivated successfully. Apr 14 01:04:12.593434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 14 01:04:12.710377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:04:15.757197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:04:15.853422 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:04:16.297068 kubelet[1905]: E0414 01:04:16.296169 1905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:04:16.370310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:04:16.370846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:04:16.387221 systemd[1]: kubelet.service: Consumed 1.132s CPU time. Apr 14 01:04:26.611084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 14 01:04:26.655300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:04:27.666654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 01:04:27.794251 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:04:28.173294 kubelet[1965]: E0414 01:04:28.170905 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:04:28.200450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:04:28.200679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:04:38.380765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 14 01:04:38.565330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:04:40.395336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:04:40.409739 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:04:41.024991 containerd[1483]: time="2026-04-14T01:04:41.024169332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:04:41.029244 containerd[1483]: time="2026-04-14T01:04:41.025515283Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 01:04:41.029632 containerd[1483]: time="2026-04-14T01:04:41.029531290Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:04:41.048677 containerd[1483]: time="2026-04-14T01:04:41.048614626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:04:41.056460 containerd[1483]: time="2026-04-14T01:04:41.049864480Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 40.323972297s" Apr 14 01:04:41.056460 containerd[1483]: time="2026-04-14T01:04:41.050144477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 01:04:41.068863 kubelet[1983]: E0414 01:04:41.065977 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:04:41.069419 containerd[1483]: time="2026-04-14T01:04:41.069383996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 01:04:41.082218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:04:41.108474 systemd[1]: 
kubelet.service: Failed with result 'exit-code'. Apr 14 01:04:51.115658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 14 01:04:51.160715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:04:52.573511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:04:52.603601 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:04:52.953499 kubelet[2003]: E0414 01:04:52.950485 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:04:52.960613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:04:52.960785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:05:02.312043 containerd[1483]: time="2026-04-14T01:05:02.310752113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:02.473989 containerd[1483]: time="2026-04-14T01:05:02.323715729Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 01:05:02.473989 containerd[1483]: time="2026-04-14T01:05:02.455478609Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:02.848534 containerd[1483]: time="2026-04-14T01:05:02.837697384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:02.938466 containerd[1483]: time="2026-04-14T01:05:02.909673577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 21.840236376s" Apr 14 01:05:02.938466 containerd[1483]: time="2026-04-14T01:05:02.909755875Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 01:05:02.968087 containerd[1483]: time="2026-04-14T01:05:02.956618572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 01:05:03.208598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 14 01:05:03.346559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:05:08.775013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 01:05:08.896125 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:05:09.433737 kubelet[2021]: E0414 01:05:09.425213 2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:05:09.519275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:05:09.519660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:05:09.520167 systemd[1]: kubelet.service: Consumed 1.372s CPU time. Apr 14 01:05:19.772164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 14 01:05:19.947025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:05:23.624318 containerd[1483]: time="2026-04-14T01:05:23.609742830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:23.647136 containerd[1483]: time="2026-04-14T01:05:23.646975361Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 01:05:23.671181 containerd[1483]: time="2026-04-14T01:05:23.670453624Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:23.717702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 01:05:23.791672 containerd[1483]: time="2026-04-14T01:05:23.790914512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:23.813774 containerd[1483]: time="2026-04-14T01:05:23.811798921Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 20.846621996s" Apr 14 01:05:23.814885 containerd[1483]: time="2026-04-14T01:05:23.814192145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 01:05:23.819710 containerd[1483]: time="2026-04-14T01:05:23.819013849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 01:05:23.827133 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:05:25.105567 kubelet[2041]: E0414 01:05:25.105093 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:05:25.427158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:05:25.427388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:05:25.427722 systemd[1]: kubelet.service: Consumed 1.535s CPU time. Apr 14 01:05:35.614292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 14 01:05:35.644031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:05:36.733154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:05:37.040766 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:05:37.892710 kubelet[2063]: E0414 01:05:37.891791 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:05:38.057420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:05:38.058200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:05:38.072781 systemd[1]: kubelet.service: Consumed 1.140s CPU time. Apr 14 01:05:39.711789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382426384.mount: Deactivated successfully. 
Apr 14 01:05:44.777564 containerd[1483]: time="2026-04-14T01:05:44.776481806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:44.791606 containerd[1483]: time="2026-04-14T01:05:44.790746855Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 01:05:44.799067 containerd[1483]: time="2026-04-14T01:05:44.798855753Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:44.843496 containerd[1483]: time="2026-04-14T01:05:44.837857086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:05:44.844782 containerd[1483]: time="2026-04-14T01:05:44.844564691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 21.02549185s" Apr 14 01:05:44.844782 containerd[1483]: time="2026-04-14T01:05:44.844630372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 01:05:44.845536 containerd[1483]: time="2026-04-14T01:05:44.845510095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 01:05:48.144761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 14 01:05:48.259329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:05:49.501250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730773851.mount: Deactivated successfully. Apr 14 01:05:50.975281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:05:51.014503 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:05:51.368304 kubelet[2088]: E0414 01:05:51.365656 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:05:51.389286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:05:51.389447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:05:51.407823 systemd[1]: kubelet.service: Consumed 1.464s CPU time. Apr 14 01:06:01.635513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 14 01:06:01.744567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:02.804895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 01:06:02.856070 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:06:03.112737 kubelet[2154]: E0414 01:06:03.107573 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:06:03.138531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:06:03.138725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:06:04.061137 containerd[1483]: time="2026-04-14T01:06:04.061030635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:04.069231 containerd[1483]: time="2026-04-14T01:06:04.068570644Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 01:06:04.078880 containerd[1483]: time="2026-04-14T01:06:04.078766413Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:04.097296 containerd[1483]: time="2026-04-14T01:06:04.096557186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:04.099991 containerd[1483]: time="2026-04-14T01:06:04.098351091Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 19.252708162s" Apr 14 01:06:04.099991 containerd[1483]: time="2026-04-14T01:06:04.098449238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 01:06:04.118043 containerd[1483]: time="2026-04-14T01:06:04.117060999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 01:06:06.613378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494834195.mount: Deactivated successfully. 
Apr 14 01:06:06.958291 containerd[1483]: time="2026-04-14T01:06:06.956530455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:06.969921 containerd[1483]: time="2026-04-14T01:06:06.969012136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 01:06:06.976269 containerd[1483]: time="2026-04-14T01:06:06.975248103Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:06.987554 containerd[1483]: time="2026-04-14T01:06:06.985474386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:06.987554 containerd[1483]: time="2026-04-14T01:06:06.986699579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.860891187s" Apr 14 01:06:06.987554 containerd[1483]: time="2026-04-14T01:06:06.986741118Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 01:06:06.987554 containerd[1483]: time="2026-04-14T01:06:06.987765492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 01:06:09.421840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586952500.mount: Deactivated successfully. Apr 14 01:06:13.368732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Apr 14 01:06:13.510567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:14.300203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:14.315384 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:06:14.517425 kubelet[2228]: E0414 01:06:14.516231 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:06:14.534248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:06:14.534736 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 14 01:06:15.985385 containerd[1483]: time="2026-04-14T01:06:15.984532997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:16.022017 containerd[1483]: time="2026-04-14T01:06:16.021236672Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 01:06:16.042747 containerd[1483]: time="2026-04-14T01:06:16.042073305Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:16.122585 containerd[1483]: time="2026-04-14T01:06:16.121123731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:06:16.137324 containerd[1483]: time="2026-04-14T01:06:16.135978503Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 9.148178288s" Apr 14 01:06:16.137324 containerd[1483]: time="2026-04-14T01:06:16.136123485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 01:06:24.631072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 14 01:06:24.672255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:25.671646 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 01:06:25.673050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:25.979796 kubelet[2281]: E0414 01:06:25.977504 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 01:06:26.003126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 01:06:26.004310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 01:06:35.975078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:35.998571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:36.459459 systemd[1]: Reloading requested from client PID 2297 ('systemctl') (unit session-7.scope)... Apr 14 01:06:36.461317 systemd[1]: Reloading... Apr 14 01:06:36.943005 zram_generator::config[2339]: No configuration found. Apr 14 01:06:37.812388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:06:37.968747 systemd[1]: Reloading finished in 1490 ms. Apr 14 01:06:38.221227 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 01:06:38.221342 systemd[1]: kubelet.service: Failed with result 'signal'. 
Apr 14 01:06:38.221700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:38.269290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:39.217015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:39.225900 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 01:06:39.423758 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 01:06:39.423758 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 01:06:39.423758 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 01:06:39.424910 kubelet[2384]: I0414 01:06:39.424807 2384 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 01:06:40.000274 kubelet[2384]: I0414 01:06:40.000173 2384 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 01:06:40.000274 kubelet[2384]: I0414 01:06:40.000226 2384 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 01:06:40.000796 kubelet[2384]: I0414 01:06:40.000737 2384 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 01:06:40.158229 kubelet[2384]: E0414 01:06:40.157820 2384 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 01:06:40.167810 kubelet[2384]: I0414 01:06:40.166526 2384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 01:06:40.193128 kubelet[2384]: E0414 01:06:40.192846 2384 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 01:06:40.193128 kubelet[2384]: I0414 01:06:40.192954 2384 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 01:06:40.202857 kubelet[2384]: I0414 01:06:40.202648 2384 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 01:06:40.203282 kubelet[2384]: I0414 01:06:40.202974 2384 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 01:06:40.203282 kubelet[2384]: I0414 01:06:40.203004 2384 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 01:06:40.203282 kubelet[2384]: I0414 01:06:40.203172 2384 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 01:06:40.203282 kubelet[2384]: I0414 01:06:40.203182 2384 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 01:06:40.203501 kubelet[2384]: I0414 01:06:40.203358 2384 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:06:40.206865 kubelet[2384]: I0414 01:06:40.206814 2384 kubelet.go:480] "Attempting to sync node with API server" Apr 14 01:06:40.206865 kubelet[2384]: I0414 01:06:40.206840 2384 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 01:06:40.206865 kubelet[2384]: I0414 01:06:40.206865 2384 kubelet.go:386] "Adding apiserver pod source" Apr 14 01:06:40.206974 kubelet[2384]: I0414 01:06:40.206885 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 01:06:40.208495 kubelet[2384]: E0414 01:06:40.208444 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 01:06:40.208765 kubelet[2384]: E0414 01:06:40.208722 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 01:06:40.209370 
kubelet[2384]: I0414 01:06:40.209339 2384 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 01:06:40.210072 kubelet[2384]: I0414 01:06:40.209946 2384 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 01:06:40.210537 kubelet[2384]: W0414 01:06:40.210499 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 01:06:40.214654 kubelet[2384]: I0414 01:06:40.214620 2384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 01:06:40.214703 kubelet[2384]: I0414 01:06:40.214691 2384 server.go:1289] "Started kubelet" Apr 14 01:06:40.217350 kubelet[2384]: I0414 01:06:40.217012 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 01:06:40.220970 kubelet[2384]: I0414 01:06:40.219366 2384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 01:06:40.224663 kubelet[2384]: I0414 01:06:40.223080 2384 server.go:317] "Adding debug handlers to kubelet server" Apr 14 01:06:40.224663 kubelet[2384]: E0414 01:06:40.221382 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a613bbc422826f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 01:06:40.214647407 +0000 UTC m=+0.937584326,LastTimestamp:2026-04-14 01:06:40.214647407 +0000 UTC m=+0.937584326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 01:06:40.225260 kubelet[2384]: I0414 01:06:40.225241 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 01:06:40.229183 kubelet[2384]: I0414 01:06:40.227021 2384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 01:06:40.229183 kubelet[2384]: I0414 01:06:40.227611 2384 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 01:06:40.229183 kubelet[2384]: E0414 01:06:40.227678 2384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:06:40.229183 kubelet[2384]: I0414 01:06:40.227705 2384 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 01:06:40.229183 kubelet[2384]: I0414 01:06:40.227881 2384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 01:06:40.229728 kubelet[2384]: E0414 01:06:40.229687 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 01:06:40.230355 kubelet[2384]: I0414 01:06:40.230304 2384 reconciler.go:26] "Reconciler: start to sync 
state" Apr 14 01:06:40.230519 kubelet[2384]: E0414 01:06:40.230426 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Apr 14 01:06:40.232579 kubelet[2384]: E0414 01:06:40.232555 2384 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 01:06:40.232740 kubelet[2384]: I0414 01:06:40.232597 2384 factory.go:223] Registration of the containerd container factory successfully Apr 14 01:06:40.233066 kubelet[2384]: I0414 01:06:40.233022 2384 factory.go:223] Registration of the systemd container factory successfully Apr 14 01:06:40.234398 kubelet[2384]: I0414 01:06:40.234174 2384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 01:06:40.242897 kubelet[2384]: I0414 01:06:40.242816 2384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 01:06:40.269347 kubelet[2384]: I0414 01:06:40.269167 2384 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 01:06:40.269347 kubelet[2384]: I0414 01:06:40.269190 2384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 01:06:40.269347 kubelet[2384]: I0414 01:06:40.269205 2384 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:06:40.273731 kubelet[2384]: I0414 01:06:40.273649 2384 policy_none.go:49] "None policy: Start" Apr 14 01:06:40.273731 kubelet[2384]: I0414 01:06:40.273690 2384 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 01:06:40.273731 kubelet[2384]: I0414 01:06:40.273705 2384 state_mem.go:35] "Initializing new in-memory state store" Apr 14 01:06:40.282503 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 14 01:06:40.294175 kubelet[2384]: I0414 01:06:40.286613 2384 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 01:06:40.294175 kubelet[2384]: I0414 01:06:40.286661 2384 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 01:06:40.294175 kubelet[2384]: I0414 01:06:40.286693 2384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 01:06:40.294175 kubelet[2384]: I0414 01:06:40.286699 2384 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 01:06:40.294175 kubelet[2384]: E0414 01:06:40.286766 2384 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 01:06:40.294175 kubelet[2384]: E0414 01:06:40.287338 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 01:06:40.310961 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 14 01:06:40.330301 kubelet[2384]: E0414 01:06:40.330025 2384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:06:40.357256 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 14 01:06:40.376971 kubelet[2384]: E0414 01:06:40.376881 2384 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 01:06:40.377472 kubelet[2384]: I0414 01:06:40.377429 2384 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 01:06:40.377522 kubelet[2384]: I0414 01:06:40.377464 2384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 01:06:40.379700 kubelet[2384]: I0414 01:06:40.377841 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 01:06:40.383008 kubelet[2384]: E0414 01:06:40.382917 2384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 01:06:40.383171 kubelet[2384]: E0414 01:06:40.383029 2384 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 01:06:40.417061 systemd[1]: Created slice kubepods-burstable-pod219e82b34589536605d26af4f7ee5cbb.slice - libcontainer container kubepods-burstable-pod219e82b34589536605d26af4f7ee5cbb.slice. Apr 14 01:06:40.433233 kubelet[2384]: I0414 01:06:40.432875 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:40.433233 kubelet[2384]: I0414 01:06:40.433027 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:40.433233 kubelet[2384]: I0414 01:06:40.433056 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:40.433233 kubelet[2384]: I0414 01:06:40.433076 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:40.433233 kubelet[2384]: I0414 01:06:40.433097 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:40.447444 
kubelet[2384]: I0414 01:06:40.433122 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:40.447444 kubelet[2384]: I0414 01:06:40.433142 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:40.447444 kubelet[2384]: E0414 01:06:40.432973 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Apr 14 01:06:40.447444 kubelet[2384]: I0414 01:06:40.433188 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:40.447444 kubelet[2384]: I0414 01:06:40.433224 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:40.510687 kubelet[2384]: E0414 01:06:40.509574 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:40.512670 kubelet[2384]: I0414 01:06:40.512543 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:06:40.517176 kubelet[2384]: E0414 01:06:40.517109 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 14 01:06:40.536007 systemd[1]: Created slice kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice - libcontainer container kubepods-burstable-podebf8e820819e4b80bc03d078b9ba80f5.slice. 
Apr 14 01:06:40.562250 kubelet[2384]: E0414 01:06:40.562155 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:40.566686 kubelet[2384]: E0414 01:06:40.566147 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:40.571545 containerd[1483]: time="2026-04-14T01:06:40.571482770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 01:06:40.576901 systemd[1]: Created slice kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice - libcontainer container kubepods-burstable-pod39798d73a6894e44ae801eb773bf9a39.slice. Apr 14 01:06:40.579471 kubelet[2384]: E0414 01:06:40.579441 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:40.579792 kubelet[2384]: E0414 01:06:40.579762 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:40.580520 containerd[1483]: time="2026-04-14T01:06:40.580220589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 01:06:40.731375 kubelet[2384]: I0414 01:06:40.731290 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:06:40.733757 kubelet[2384]: E0414 01:06:40.732649 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 14 01:06:40.815155 kubelet[2384]: E0414 01:06:40.813915 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:40.817569 containerd[1483]: time="2026-04-14T01:06:40.817505728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:219e82b34589536605d26af4f7ee5cbb,Namespace:kube-system,Attempt:0,}" Apr 14 01:06:40.834773 kubelet[2384]: E0414 01:06:40.834583 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Apr 14 01:06:41.039618 kubelet[2384]: E0414 01:06:41.039059 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 01:06:41.039618 kubelet[2384]: E0414 01:06:41.039541 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Apr 14 01:06:41.131419 kubelet[2384]: E0414 01:06:41.131083 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 01:06:41.146997 kubelet[2384]: I0414 01:06:41.143236 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:06:41.151791 kubelet[2384]: E0414 01:06:41.148483 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 14 01:06:41.218551 kubelet[2384]: E0414 01:06:41.218149 2384 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 01:06:41.218551 kubelet[2384]: E0414 01:06:41.218535 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a613bbc422826f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 01:06:40.214647407 +0000 UTC m=+0.937584326,LastTimestamp:2026-04-14 01:06:40.214647407 +0000 UTC m=+0.937584326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 01:06:41.235238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960333516.mount: Deactivated successfully. 
Apr 14 01:06:41.262674 containerd[1483]: time="2026-04-14T01:06:41.261728642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:06:41.268621 containerd[1483]: time="2026-04-14T01:06:41.267139527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 01:06:41.270371 containerd[1483]: time="2026-04-14T01:06:41.270110323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:06:41.276835 containerd[1483]: time="2026-04-14T01:06:41.273810061Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:06:41.283227 containerd[1483]: time="2026-04-14T01:06:41.281726416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 01:06:41.283866 containerd[1483]: time="2026-04-14T01:06:41.283680566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 01:06:41.284181 containerd[1483]: time="2026-04-14T01:06:41.284149683Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:06:41.291324 containerd[1483]: time="2026-04-14T01:06:41.291140086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:06:41.292053 containerd[1483]: time="2026-04-14T01:06:41.292014285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 711.715635ms" Apr 14 01:06:41.301201 containerd[1483]: time="2026-04-14T01:06:41.300699193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.082444ms" Apr 14 01:06:41.301887 containerd[1483]: time="2026-04-14T01:06:41.301561872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 729.980991ms" Apr 14 01:06:41.580876 containerd[1483]: time="2026-04-14T01:06:41.580357893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:06:41.580876 containerd[1483]: time="2026-04-14T01:06:41.580534105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:06:41.580876 containerd[1483]: time="2026-04-14T01:06:41.580558587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.581396 containerd[1483]: time="2026-04-14T01:06:41.580652264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.582663 containerd[1483]: time="2026-04-14T01:06:41.582585359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:06:41.582663 containerd[1483]: time="2026-04-14T01:06:41.582669334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:06:41.582663 containerd[1483]: time="2026-04-14T01:06:41.582690392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.583000 containerd[1483]: time="2026-04-14T01:06:41.582797327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.614513 containerd[1483]: time="2026-04-14T01:06:41.613744272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:06:41.614513 containerd[1483]: time="2026-04-14T01:06:41.614322101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:06:41.614513 containerd[1483]: time="2026-04-14T01:06:41.614343259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.614513 containerd[1483]: time="2026-04-14T01:06:41.614588691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:06:41.640091 kubelet[2384]: E0414 01:06:41.639647 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Apr 14 01:06:41.692820 systemd[1]: Started cri-containerd-2a0a4a7512d67c54722fffd646cd82dfaeaacbd9c89e4a28e641fd7909ba4b9c.scope - libcontainer container 2a0a4a7512d67c54722fffd646cd82dfaeaacbd9c89e4a28e641fd7909ba4b9c. Apr 14 01:06:41.702979 systemd[1]: Started cri-containerd-34ee3d32be5048f7fd8e498fd6d43d9c70e9b49a84bb44b3c261a31f487d2bf5.scope - libcontainer container 34ee3d32be5048f7fd8e498fd6d43d9c70e9b49a84bb44b3c261a31f487d2bf5. Apr 14 01:06:41.706694 systemd[1]: Started cri-containerd-4696e38f0207220df942765eb4f859eb06802ad36b8032b01cfced46ba32ed75.scope - libcontainer container 4696e38f0207220df942765eb4f859eb06802ad36b8032b01cfced46ba32ed75. 
Apr 14 01:06:41.796145 containerd[1483]: time="2026-04-14T01:06:41.795968714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:219e82b34589536605d26af4f7ee5cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a0a4a7512d67c54722fffd646cd82dfaeaacbd9c89e4a28e641fd7909ba4b9c\"" Apr 14 01:06:41.797535 kubelet[2384]: E0414 01:06:41.797473 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:41.809155 containerd[1483]: time="2026-04-14T01:06:41.808842943Z" level=info msg="CreateContainer within sandbox \"2a0a4a7512d67c54722fffd646cd82dfaeaacbd9c89e4a28e641fd7909ba4b9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 01:06:41.829327 containerd[1483]: time="2026-04-14T01:06:41.829201610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4696e38f0207220df942765eb4f859eb06802ad36b8032b01cfced46ba32ed75\"" Apr 14 01:06:41.843714 kubelet[2384]: E0414 01:06:41.840422 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:41.850006 containerd[1483]: time="2026-04-14T01:06:41.849971066Z" level=info msg="CreateContainer within sandbox \"4696e38f0207220df942765eb4f859eb06802ad36b8032b01cfced46ba32ed75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 01:06:41.858003 containerd[1483]: time="2026-04-14T01:06:41.857744398Z" level=info msg="CreateContainer within sandbox \"2a0a4a7512d67c54722fffd646cd82dfaeaacbd9c89e4a28e641fd7909ba4b9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54a46c23dd5f5aa5c888cd3642518a778c174fa584df4eec87e69e3499f2ac7b\"" Apr 14 01:06:41.858003 containerd[1483]: time="2026-04-14T01:06:41.858391919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"34ee3d32be5048f7fd8e498fd6d43d9c70e9b49a84bb44b3c261a31f487d2bf5\"" Apr 14 01:06:41.859104 kubelet[2384]: E0414 01:06:41.858983 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:41.859250 containerd[1483]: time="2026-04-14T01:06:41.859205899Z" level=info msg="StartContainer for \"54a46c23dd5f5aa5c888cd3642518a778c174fa584df4eec87e69e3499f2ac7b\"" Apr 14 01:06:41.882229 containerd[1483]: time="2026-04-14T01:06:41.882004582Z" level=info msg="CreateContainer within sandbox \"34ee3d32be5048f7fd8e498fd6d43d9c70e9b49a84bb44b3c261a31f487d2bf5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 01:06:41.886212 containerd[1483]: time="2026-04-14T01:06:41.886172746Z" level=info msg="CreateContainer within sandbox \"4696e38f0207220df942765eb4f859eb06802ad36b8032b01cfced46ba32ed75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"44384c697c19da420ccf0c6fbf6abeb407b979ed8419fd83e9a607e5a8cce099\"" Apr 14 01:06:41.889101 containerd[1483]: time="2026-04-14T01:06:41.888894395Z" level=info msg="StartContainer for \"44384c697c19da420ccf0c6fbf6abeb407b979ed8419fd83e9a607e5a8cce099\"" Apr 14 
01:06:41.904725 containerd[1483]: time="2026-04-14T01:06:41.903138704Z" level=info msg="CreateContainer within sandbox \"34ee3d32be5048f7fd8e498fd6d43d9c70e9b49a84bb44b3c261a31f487d2bf5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e754473ae8bb5853822c262ed7687c9d59a7bd4839669c0e97ad06bd6d6c771f\"" Apr 14 01:06:41.904725 containerd[1483]: time="2026-04-14T01:06:41.903860305Z" level=info msg="StartContainer for \"e754473ae8bb5853822c262ed7687c9d59a7bd4839669c0e97ad06bd6d6c771f\"" Apr 14 01:06:41.912030 systemd[1]: Started cri-containerd-54a46c23dd5f5aa5c888cd3642518a778c174fa584df4eec87e69e3499f2ac7b.scope - libcontainer container 54a46c23dd5f5aa5c888cd3642518a778c174fa584df4eec87e69e3499f2ac7b. Apr 14 01:06:41.956872 kubelet[2384]: I0414 01:06:41.956563 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:06:41.958476 kubelet[2384]: E0414 01:06:41.958417 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 14 01:06:42.005877 systemd[1]: Started cri-containerd-44384c697c19da420ccf0c6fbf6abeb407b979ed8419fd83e9a607e5a8cce099.scope - libcontainer container 44384c697c19da420ccf0c6fbf6abeb407b979ed8419fd83e9a607e5a8cce099. Apr 14 01:06:42.027452 systemd[1]: Started cri-containerd-e754473ae8bb5853822c262ed7687c9d59a7bd4839669c0e97ad06bd6d6c771f.scope - libcontainer container e754473ae8bb5853822c262ed7687c9d59a7bd4839669c0e97ad06bd6d6c771f. Apr 14 01:06:42.063714 containerd[1483]: time="2026-04-14T01:06:42.063646500Z" level=info msg="StartContainer for \"54a46c23dd5f5aa5c888cd3642518a778c174fa584df4eec87e69e3499f2ac7b\" returns successfully" Apr 14 01:06:42.333427 kubelet[2384]: E0414 01:06:42.325511 2384 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 01:06:42.346773 containerd[1483]: time="2026-04-14T01:06:42.342873951Z" level=info msg="StartContainer for \"44384c697c19da420ccf0c6fbf6abeb407b979ed8419fd83e9a607e5a8cce099\" returns successfully" Apr 14 01:06:42.411507 containerd[1483]: time="2026-04-14T01:06:42.410575095Z" level=info msg="StartContainer for \"e754473ae8bb5853822c262ed7687c9d59a7bd4839669c0e97ad06bd6d6c771f\" returns successfully" Apr 14 01:06:42.438476 kubelet[2384]: E0414 01:06:42.438253 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:42.438476 kubelet[2384]: E0414 01:06:42.438668 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:42.455082 kubelet[2384]: E0414 01:06:42.454817 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:42.456043 kubelet[2384]: E0414 01:06:42.455977 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:43.477718 kubelet[2384]: 
E0414 01:06:43.476047 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:43.477718 kubelet[2384]: E0414 01:06:43.476299 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:43.477718 kubelet[2384]: E0414 01:06:43.476580 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:43.478529 kubelet[2384]: E0414 01:06:43.478476 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:43.577154 kubelet[2384]: I0414 01:06:43.574568 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:06:44.523397 kubelet[2384]: E0414 01:06:44.521609 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:44.523397 kubelet[2384]: E0414 01:06:44.522810 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:44.523397 kubelet[2384]: E0414 01:06:44.522863 2384 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:06:44.523397 kubelet[2384]: E0414 01:06:44.523460 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:44.813786 kubelet[2384]: E0414 01:06:44.809822 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 01:06:44.889457 kubelet[2384]: I0414 01:06:44.887863 2384 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 01:06:44.889457 kubelet[2384]: E0414 01:06:44.887910 2384 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 01:06:44.937413 kubelet[2384]: I0414 01:06:44.932452 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:44.976561 kubelet[2384]: E0414 01:06:44.975712 2384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:44.976561 kubelet[2384]: I0414 01:06:44.975852 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:44.983389 kubelet[2384]: E0414 01:06:44.982409 2384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:44.983389 kubelet[2384]: I0414 01:06:44.982479 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:44.999674 kubelet[2384]: E0414 
01:06:44.996024 2384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:45.225335 kubelet[2384]: I0414 01:06:45.220450 2384 apiserver.go:52] "Watching apiserver" Apr 14 01:06:45.331729 kubelet[2384]: I0414 01:06:45.329269 2384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 01:06:45.525129 kubelet[2384]: I0414 01:06:45.521870 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:45.526033 kubelet[2384]: E0414 01:06:45.525812 2384 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:45.526243 kubelet[2384]: E0414 01:06:45.526184 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:48.319461 kubelet[2384]: I0414 01:06:48.316163 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:06:48.427409 kubelet[2384]: E0414 01:06:48.425726 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:48.608914 kubelet[2384]: I0414 01:06:48.570663 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:06:48.608914 kubelet[2384]: E0414 01:06:48.577782 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:48.697848 kubelet[2384]: E0414 01:06:48.697268 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:49.627683 kubelet[2384]: E0414 01:06:49.626192 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:50.423235 kubelet[2384]: I0414 01:06:50.418833 2384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.418806981 podStartE2EDuration="2.418806981s" podCreationTimestamp="2026-04-14 01:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:06:50.374489163 +0000 UTC m=+11.097426104" watchObservedRunningTime="2026-04-14 01:06:50.418806981 +0000 UTC m=+11.141743911" Apr 14 01:06:53.578334 kubelet[2384]: I0414 01:06:53.571779 2384 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:06:53.649671 kubelet[2384]: I0414 01:06:53.640837 2384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.6408150599999995 podStartE2EDuration="5.64081506s" podCreationTimestamp="2026-04-14 01:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:06:50.419216833 +0000 UTC m=+11.142153767" watchObservedRunningTime="2026-04-14 01:06:53.64081506 +0000 UTC m=+14.363752001" Apr 14 01:06:53.649671 kubelet[2384]: E0414 01:06:53.645986 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:53.682484 kubelet[2384]: E0414 01:06:53.682303 2384 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:06:53.725258 kubelet[2384]: I0414 01:06:53.719584 2384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.719550525 podStartE2EDuration="719.550525ms" podCreationTimestamp="2026-04-14 01:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:06:53.71279869 +0000 UTC m=+14.435735619" watchObservedRunningTime="2026-04-14 01:06:53.719550525 +0000 UTC m=+14.442487459" Apr 14 01:06:54.241205 systemd[1]: Reloading requested from client PID 2673 ('systemctl') (unit session-7.scope)... Apr 14 01:06:54.241235 systemd[1]: Reloading... Apr 14 01:06:55.031991 zram_generator::config[2711]: No configuration found. Apr 14 01:06:56.177851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:06:57.062132 systemd[1]: Reloading finished in 2818 ms. Apr 14 01:06:57.479864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:57.543277 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 01:06:57.543670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:57.543992 systemd[1]: kubelet.service: Consumed 3.983s CPU time, 133.5M memory peak, 0B memory swap peak. Apr 14 01:06:57.562263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:06:58.736025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 01:06:58.777919 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 01:06:59.230772 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 01:06:59.230772 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 01:06:59.230772 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 01:06:59.324191 kubelet[2757]: I0414 01:06:59.230881 2757 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 01:06:59.478141 kubelet[2757]: I0414 01:06:59.476485 2757 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 01:06:59.478141 kubelet[2757]: I0414 01:06:59.476972 2757 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 01:06:59.506480 kubelet[2757]: I0414 01:06:59.485210 2757 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 01:06:59.508169 kubelet[2757]: I0414 01:06:59.508130 2757 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 01:06:59.531795 kubelet[2757]: I0414 01:06:59.520401 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 01:06:59.669994 kubelet[2757]: E0414 01:06:59.662324 2757 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 01:06:59.669994 kubelet[2757]: I0414 01:06:59.662389 2757 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 01:06:59.802501 kubelet[2757]: I0414 01:06:59.788696 2757 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 01:06:59.802501 kubelet[2757]: I0414 01:06:59.796710 2757 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 01:06:59.802501 kubelet[2757]: I0414 01:06:59.796763 2757 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 14 01:06:59.802501 kubelet[2757]: I0414 01:06:59.800558 2757 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.800589 2757 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.800905 2757 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.814412 2757 kubelet.go:480] "Attempting to sync node with API server" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.814448 2757 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.814490 2757 kubelet.go:386] "Adding apiserver pod source" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.814510 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 01:06:59.825919 kubelet[2757]: I0414 01:06:59.825721 2757 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 01:06:59.830705 kubelet[2757]: I0414 01:06:59.826457 2757 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 01:06:59.923859 kubelet[2757]: I0414 01:06:59.922278 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 01:06:59.923859 kubelet[2757]: I0414 01:06:59.922470 2757 server.go:1289] "Started kubelet" Apr 14 01:06:59.924973 kubelet[2757]: I0414 01:06:59.924130 2757 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 01:06:59.924973 kubelet[2757]: I0414 01:06:59.924166 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 01:06:59.924973 kubelet[2757]: I0414 01:06:59.924465 2757 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 01:06:59.975374 kubelet[2757]: E0414 01:06:59.974775 2757 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 01:06:59.987190 kubelet[2757]: I0414 01:06:59.976888 2757 server.go:317] "Adding debug handlers to kubelet server" Apr 14 01:06:59.987190 kubelet[2757]: I0414 01:06:59.982600 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 01:06:59.987190 kubelet[2757]: I0414 01:06:59.987042 2757 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 01:06:59.995596 kubelet[2757]: I0414 01:06:59.990361 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 01:07:00.022003 kubelet[2757]: I0414 01:07:00.021291 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 01:07:00.022973 kubelet[2757]: I0414 01:07:00.022270 2757 reconciler.go:26] "Reconciler: start to sync state" Apr 14 01:07:00.034074 kubelet[2757]: I0414 01:07:00.034010 2757 factory.go:223] Registration of the containerd container factory successfully Apr 14 01:07:00.034074 kubelet[2757]: I0414 01:07:00.034044 2757 factory.go:223] Registration of the systemd container factory successfully Apr 14 01:07:00.034434 kubelet[2757]: I0414 01:07:00.034156 2757 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 01:07:00.211618 kubelet[2757]: I0414 01:07:00.211174 2757 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 01:07:00.216004 kubelet[2757]: I0414 01:07:00.214450 2757 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 01:07:00.216004 kubelet[2757]: I0414 01:07:00.214488 2757 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 01:07:00.216004 kubelet[2757]: I0414 01:07:00.214522 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 14 01:07:00.216004 kubelet[2757]: I0414 01:07:00.214530 2757 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 01:07:00.218631 kubelet[2757]: E0414 01:07:00.214588 2757 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 01:07:00.319197 kubelet[2757]: E0414 01:07:00.318712 2757 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320174 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320575 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320599 2757 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320886 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320895 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320915 2757 policy_none.go:49] "None policy: Start" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320958 2757 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.320971 2757 state_mem.go:35] "Initializing new in-memory state store" Apr 14 01:07:00.321100 kubelet[2757]: I0414 01:07:00.321077 2757 state_mem.go:75] "Updated machine memory state" Apr 14 01:07:00.375896 kubelet[2757]: E0414 01:07:00.373616 2757 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 01:07:00.375896 kubelet[2757]: I0414 01:07:00.374917 2757 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 01:07:00.375896 kubelet[2757]: I0414 01:07:00.374988 2757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 01:07:00.375896 kubelet[2757]: I0414 01:07:00.375513 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 01:07:00.376959 kubelet[2757]: I0414 01:07:00.376648 2757 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 01:07:00.383488 containerd[1483]: time="2026-04-14T01:07:00.377824921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 01:07:00.421161 kubelet[2757]: I0414 01:07:00.389638 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 01:07:00.421161 kubelet[2757]: E0414 01:07:00.405459 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 01:07:00.600764 kubelet[2757]: I0414 01:07:00.556568 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:07:00.600764 kubelet[2757]: I0414 01:07:00.557793 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.600764 kubelet[2757]: I0414 01:07:00.557836 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.600764 kubelet[2757]: I0414 01:07:00.557861 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.600764 kubelet[2757]: I0414 01:07:00.557879 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.557897 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.557919 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.557964 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.557985 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/219e82b34589536605d26af4f7ee5cbb-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"219e82b34589536605d26af4f7ee5cbb\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.557059 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.559372 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.593175 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 01:07:00.637320 kubelet[2757]: I0414 01:07:00.610674 2757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:07:00.680610 kubelet[2757]: E0414 01:07:00.678610 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 14 01:07:00.680610 kubelet[2757]: E0414 01:07:00.679462 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:00.680610 kubelet[2757]: E0414 01:07:00.679600 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 01:07:00.680610 kubelet[2757]: E0414 01:07:00.679885 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:00.697215 kubelet[2757]: E0414 01:07:00.688079 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 01:07:00.697215 kubelet[2757]: E0414 01:07:00.689102 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:00.742490 kubelet[2757]: I0414 01:07:00.741881 2757 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 01:07:00.742490 kubelet[2757]: I0414 01:07:00.742483 2757 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 01:07:00.826241 kubelet[2757]: I0414 01:07:00.824848 2757 apiserver.go:52] "Watching apiserver" Apr 14 01:07:00.881632 systemd[1]: Created slice kubepods-besteffort-pod2d27da8c_04c5_491f_8a67_eb9b7ee5c4d4.slice - libcontainer container kubepods-besteffort-pod2d27da8c_04c5_491f_8a67_eb9b7ee5c4d4.slice. 
Apr 14 01:07:00.928125 kubelet[2757]: I0414 01:07:00.925697 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 01:07:00.977002 kubelet[2757]: I0414 01:07:00.976057 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4-xtables-lock\") pod \"kube-proxy-8zlz9\" (UID: \"2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4\") " pod="kube-system/kube-proxy-8zlz9" Apr 14 01:07:00.977002 kubelet[2757]: I0414 01:07:00.976109 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4-lib-modules\") pod \"kube-proxy-8zlz9\" (UID: \"2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4\") " pod="kube-system/kube-proxy-8zlz9" Apr 14 01:07:00.977002 kubelet[2757]: I0414 01:07:00.976138 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4-kube-proxy\") pod \"kube-proxy-8zlz9\" (UID: \"2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4\") " pod="kube-system/kube-proxy-8zlz9" Apr 14 01:07:00.977002 kubelet[2757]: I0414 01:07:00.976165 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w5x4\" (UniqueName: \"kubernetes.io/projected/2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4-kube-api-access-6w5x4\") pod \"kube-proxy-8zlz9\" (UID: \"2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4\") " pod="kube-system/kube-proxy-8zlz9" Apr 14 01:07:01.296425 kubelet[2757]: I0414 01:07:01.295126 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 01:07:01.313979 kubelet[2757]: E0414 01:07:01.305206 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:01.330663 kubelet[2757]: E0414 01:07:01.330585 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:01.373866 kubelet[2757]: E0414 01:07:01.373259 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 14 01:07:01.373866 kubelet[2757]: E0414 01:07:01.373975 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:01.538987 kubelet[2757]: E0414 01:07:01.525013 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:01.560715 containerd[1483]: time="2026-04-14T01:07:01.527356984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zlz9,Uid:2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4,Namespace:kube-system,Attempt:0,}" Apr 14 01:07:02.062326 containerd[1483]: time="2026-04-14T01:07:02.060025980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:07:02.062326 containerd[1483]: time="2026-04-14T01:07:02.060188764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:07:02.062326 containerd[1483]: time="2026-04-14T01:07:02.060212955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:07:02.062326 containerd[1483]: time="2026-04-14T01:07:02.060513894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:07:02.263191 systemd[1]: Started cri-containerd-4eecfc85aa1dc59d20a9e3d1a22dd74499f9ab23f78c2babbf9779adf80da9df.scope - libcontainer container 4eecfc85aa1dc59d20a9e3d1a22dd74499f9ab23f78c2babbf9779adf80da9df. Apr 14 01:07:02.300091 kubelet[2757]: E0414 01:07:02.299844 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:02.302215 kubelet[2757]: E0414 01:07:02.302163 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:02.552178 containerd[1483]: time="2026-04-14T01:07:02.529843212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zlz9,Uid:2d27da8c-04c5-491f-8a67-eb9b7ee5c4d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4eecfc85aa1dc59d20a9e3d1a22dd74499f9ab23f78c2babbf9779adf80da9df\"" Apr 14 01:07:02.561787 kubelet[2757]: E0414 01:07:02.561351 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:02.585702 containerd[1483]: time="2026-04-14T01:07:02.585465882Z" level=info msg="CreateContainer within sandbox \"4eecfc85aa1dc59d20a9e3d1a22dd74499f9ab23f78c2babbf9779adf80da9df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 01:07:02.719365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355847158.mount: Deactivated successfully. 
Apr 14 01:07:02.774430 kubelet[2757]: E0414 01:07:02.769479 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:02.838713 containerd[1483]: time="2026-04-14T01:07:02.827051066Z" level=info msg="CreateContainer within sandbox \"4eecfc85aa1dc59d20a9e3d1a22dd74499f9ab23f78c2babbf9779adf80da9df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030\"" Apr 14 01:07:02.851218 containerd[1483]: time="2026-04-14T01:07:02.849455352Z" level=info msg="StartContainer for \"50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030\"" Apr 14 01:07:03.412195 kubelet[2757]: E0414 01:07:03.399603 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:03.412195 kubelet[2757]: E0414 01:07:03.400322 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:03.415470 systemd[1]: run-containerd-runc-k8s.io-50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030-runc.qh2KK7.mount: Deactivated successfully. Apr 14 01:07:03.528020 systemd[1]: Started cri-containerd-50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030.scope - libcontainer container 50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030. Apr 14 01:07:03.855792 containerd[1483]: time="2026-04-14T01:07:03.855173028Z" level=info msg="StartContainer for \"50baeae1397ad5cc59772e94bfd8a8cae8598c38aaead72fd119701d572b2030\" returns successfully" Apr 14 01:07:04.420283 kubelet[2757]: E0414 01:07:04.420210 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:05.438102 kubelet[2757]: E0414 01:07:05.437888 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:05.619217 kubelet[2757]: I0414 01:07:05.607643 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8zlz9" podStartSLOduration=6.6076101529999995 podStartE2EDuration="6.607610153s" podCreationTimestamp="2026-04-14 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:07:04.636374189 +0000 UTC m=+5.810488105" watchObservedRunningTime="2026-04-14 01:07:05.607610153 +0000 UTC m=+6.781724075" Apr 14 01:07:05.733911 systemd[1]: Created slice kubepods-besteffort-pod6779a995_1b82_43dc_b0e3_b0962171f698.slice - libcontainer container kubepods-besteffort-pod6779a995_1b82_43dc_b0e3_b0962171f698.slice. 
Apr 14 01:07:05.788223 kubelet[2757]: I0414 01:07:05.787560 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6779a995-1b82-43dc-b0e3-b0962171f698-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-9m98v\" (UID: \"6779a995-1b82-43dc-b0e3-b0962171f698\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9m98v" Apr 14 01:07:05.788223 kubelet[2757]: I0414 01:07:05.787642 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf49j\" (UniqueName: \"kubernetes.io/projected/6779a995-1b82-43dc-b0e3-b0962171f698-kube-api-access-zf49j\") pod \"tigera-operator-6bf85f8dd-9m98v\" (UID: \"6779a995-1b82-43dc-b0e3-b0962171f698\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9m98v" Apr 14 01:07:06.078313 containerd[1483]: time="2026-04-14T01:07:06.053700404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9m98v,Uid:6779a995-1b82-43dc-b0e3-b0962171f698,Namespace:tigera-operator,Attempt:0,}" Apr 14 01:07:06.305217 containerd[1483]: time="2026-04-14T01:07:06.303214661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:07:06.305217 containerd[1483]: time="2026-04-14T01:07:06.303443002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:07:06.305217 containerd[1483]: time="2026-04-14T01:07:06.303608076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:07:06.305217 containerd[1483]: time="2026-04-14T01:07:06.303971900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:07:06.566731 systemd[1]: Started cri-containerd-1f4374ac55eaa4f27d03aa60f9134edced6add26f5eadab99e1cd7cd272f04a5.scope - libcontainer container 1f4374ac55eaa4f27d03aa60f9134edced6add26f5eadab99e1cd7cd272f04a5. Apr 14 01:07:07.162380 containerd[1483]: time="2026-04-14T01:07:07.160717210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9m98v,Uid:6779a995-1b82-43dc-b0e3-b0962171f698,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1f4374ac55eaa4f27d03aa60f9134edced6add26f5eadab99e1cd7cd272f04a5\"" Apr 14 01:07:07.188143 containerd[1483]: time="2026-04-14T01:07:07.187722089Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 01:07:07.493686 kubelet[2757]: E0414 01:07:07.493053 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:08.526775 kubelet[2757]: E0414 01:07:08.485419 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:09.539683 kubelet[2757]: E0414 01:07:09.518165 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:07:12.167387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326731352.mount: Deactivated successfully. 
Apr 14 01:07:28.425303 containerd[1483]: time="2026-04-14T01:07:28.419126421Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:07:28.428907 containerd[1483]: time="2026-04-14T01:07:28.428464423Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 01:07:28.433438 containerd[1483]: time="2026-04-14T01:07:28.432758293Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:07:28.439713 containerd[1483]: time="2026-04-14T01:07:28.439600181Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:07:28.441071 containerd[1483]: time="2026-04-14T01:07:28.440682976Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 21.252886093s" Apr 14 01:07:28.441071 containerd[1483]: time="2026-04-14T01:07:28.440747251Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 01:07:28.492455 containerd[1483]: time="2026-04-14T01:07:28.492037515Z" level=info msg="CreateContainer within sandbox \"1f4374ac55eaa4f27d03aa60f9134edced6add26f5eadab99e1cd7cd272f04a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 01:07:28.674909 containerd[1483]: time="2026-04-14T01:07:28.674798634Z" level=info msg="CreateContainer within sandbox \"1f4374ac55eaa4f27d03aa60f9134edced6add26f5eadab99e1cd7cd272f04a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40c985b623ea0ef5e2e05b42d1596ff9e40ecb8c6cb6b1a67cab3c8ab298b42b\"" Apr 14 01:07:28.678057 containerd[1483]: time="2026-04-14T01:07:28.676360088Z" level=info msg="StartContainer for \"40c985b623ea0ef5e2e05b42d1596ff9e40ecb8c6cb6b1a67cab3c8ab298b42b\"" Apr 14 01:07:28.841669 systemd[1]: Started cri-containerd-40c985b623ea0ef5e2e05b42d1596ff9e40ecb8c6cb6b1a67cab3c8ab298b42b.scope - libcontainer container 40c985b623ea0ef5e2e05b42d1596ff9e40ecb8c6cb6b1a67cab3c8ab298b42b. Apr 14 01:07:29.117765 update_engine[1473]: I20260414 01:07:29.117590 1473 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 14 01:07:29.117765 update_engine[1473]: I20260414 01:07:29.117707 1473 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 14 01:07:29.118440 update_engine[1473]: I20260414 01:07:29.118002 1473 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 14 01:07:29.118879 update_engine[1473]: I20260414 01:07:29.118852 1473 omaha_request_params.cc:62] Current group set to lts Apr 14 01:07:29.120301 update_engine[1473]: I20260414 01:07:29.120234 1473 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 14 01:07:29.120301 update_engine[1473]: I20260414 01:07:29.120287 1473 update_attempter.cc:643] Scheduling an action processor start. 
Apr 14 01:07:29.120420 update_engine[1473]: I20260414 01:07:29.120314 1473 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 01:07:29.120420 update_engine[1473]: I20260414 01:07:29.120355 1473 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 14 01:07:29.128876 update_engine[1473]: I20260414 01:07:29.125319 1473 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 01:07:29.128876 update_engine[1473]: I20260414 01:07:29.127697 1473 omaha_request_action.cc:272] Request: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: Apr 14 01:07:29.128876 update_engine[1473]: I20260414 01:07:29.127727 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 01:07:29.129436 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 14 01:07:29.188110 update_engine[1473]: I20260414 01:07:29.186271 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 01:07:29.188110 update_engine[1473]: I20260414 01:07:29.187990 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 01:07:29.203583 containerd[1483]: time="2026-04-14T01:07:29.200693660Z" level=info msg="StartContainer for \"40c985b623ea0ef5e2e05b42d1596ff9e40ecb8c6cb6b1a67cab3c8ab298b42b\" returns successfully" Apr 14 01:07:29.205618 update_engine[1473]: E20260414 01:07:29.205426 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 01:07:29.205618 update_engine[1473]: I20260414 01:07:29.205580 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 14 01:07:39.125248 update_engine[1473]: I20260414 01:07:39.120373 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 01:07:39.125248 update_engine[1473]: I20260414 01:07:39.124410 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 01:07:39.126389 update_engine[1473]: I20260414 01:07:39.126355 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 01:07:39.142310 update_engine[1473]: E20260414 01:07:39.141074 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 01:07:39.142310 update_engine[1473]: I20260414 01:07:39.141762 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 14 01:07:47.665373 sudo[1652]: pam_unix(sudo:session): session closed for user root Apr 14 01:07:47.687494 sshd[1648]: pam_unix(sshd:session): session closed for user core Apr 14 01:07:47.700569 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:57052.service: Deactivated successfully. Apr 14 01:07:47.726634 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 01:07:47.727056 systemd[1]: session-7.scope: Consumed 18.523s CPU time, 164.1M memory peak, 0B memory swap peak. Apr 14 01:07:47.765360 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Apr 14 01:07:47.800758 systemd-logind[1472]: Removed session 7. 
Apr 14 01:07:49.123149 update_engine[1473]: I20260414 01:07:49.120487 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 01:07:49.137789 update_engine[1473]: I20260414 01:07:49.135720 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 01:07:49.143642 update_engine[1473]: I20260414 01:07:49.143556 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 01:07:49.171739 update_engine[1473]: E20260414 01:07:49.171473 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 01:07:49.171739 update_engine[1473]: I20260414 01:07:49.171617 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 14 01:07:59.127990 update_engine[1473]: I20260414 01:07:59.127027 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 01:07:59.137780 update_engine[1473]: I20260414 01:07:59.132073 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 01:07:59.137780 update_engine[1473]: I20260414 01:07:59.136837 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 01:07:59.147985 update_engine[1473]: E20260414 01:07:59.147886 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 01:07:59.148139 update_engine[1473]: I20260414 01:07:59.148015 1473 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 01:07:59.148139 update_engine[1473]: I20260414 01:07:59.148027 1473 omaha_request_action.cc:617] Omaha request response: Apr 14 01:07:59.148346 update_engine[1473]: E20260414 01:07:59.148304 1473 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 14 01:07:59.148396 update_engine[1473]: I20260414 01:07:59.148365 1473 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 14 01:07:59.148396 update_engine[1473]: I20260414 01:07:59.148375 1473 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 01:07:59.148396 update_engine[1473]: I20260414 01:07:59.148381 1473 update_attempter.cc:306] Processing Done. Apr 14 01:07:59.148467 update_engine[1473]: E20260414 01:07:59.148398 1473 update_attempter.cc:619] Update failed. Apr 14 01:07:59.148467 update_engine[1473]: I20260414 01:07:59.148406 1473 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 14 01:07:59.148467 update_engine[1473]: I20260414 01:07:59.148413 1473 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 14 01:07:59.148467 update_engine[1473]: I20260414 01:07:59.148420 1473 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 14 01:07:59.148591 update_engine[1473]: I20260414 01:07:59.148521 1473 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 14 01:07:59.148591 update_engine[1473]: I20260414 01:07:59.148550 1473 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 14 01:07:59.148591 update_engine[1473]: I20260414 01:07:59.148579 1473 omaha_request_action.cc:272] Request: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: Apr 14 01:07:59.148591 update_engine[1473]: I20260414 01:07:59.148587 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 14 01:07:59.148862 update_engine[1473]: I20260414 01:07:59.148816 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 14 01:07:59.149006 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 14 01:07:59.149372 update_engine[1473]: I20260414 01:07:59.149060 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 14 01:07:59.159344 update_engine[1473]: E20260414 01:07:59.158820 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158911 1473 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158921 1473 omaha_request_action.cc:617] Omaha request response: Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158962 1473 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158968 1473 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158974 1473 update_attempter.cc:306] Processing Done. Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158983 1473 update_attempter.cc:310] Error event sent. Apr 14 01:07:59.159344 update_engine[1473]: I20260414 01:07:59.158998 1473 update_check_scheduler.cc:74] Next update check in 42m14s Apr 14 01:07:59.164234 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 14 01:08:03.795011 kubelet[2757]: I0414 01:08:03.793744 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-9m98v" podStartSLOduration=37.534688162 podStartE2EDuration="58.793718994s" podCreationTimestamp="2026-04-14 01:07:05 +0000 UTC" firstStartedPulling="2026-04-14 01:07:07.183825498 +0000 UTC m=+8.357939420" lastFinishedPulling="2026-04-14 01:07:28.44285633 +0000 UTC m=+29.616970252" observedRunningTime="2026-04-14 01:07:29.456766013 +0000 UTC m=+30.630879944" watchObservedRunningTime="2026-04-14 01:08:03.793718994 +0000 UTC m=+64.967832924" Apr 14 01:08:03.831659 systemd[1]: Created slice kubepods-besteffort-pod17a21796_1b8a_45bf_9a90_73482c92a207.slice - libcontainer container kubepods-besteffort-pod17a21796_1b8a_45bf_9a90_73482c92a207.slice. 
Apr 14 01:08:03.865166 kubelet[2757]: I0414 01:08:03.862974 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17a21796-1b8a-45bf-9a90-73482c92a207-tigera-ca-bundle\") pod \"calico-typha-7c47ddf557-bvd6d\" (UID: \"17a21796-1b8a-45bf-9a90-73482c92a207\") " pod="calico-system/calico-typha-7c47ddf557-bvd6d" Apr 14 01:08:03.964480 kubelet[2757]: I0414 01:08:03.964032 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/17a21796-1b8a-45bf-9a90-73482c92a207-typha-certs\") pod \"calico-typha-7c47ddf557-bvd6d\" (UID: \"17a21796-1b8a-45bf-9a90-73482c92a207\") " pod="calico-system/calico-typha-7c47ddf557-bvd6d" Apr 14 01:08:03.964480 kubelet[2757]: I0414 01:08:03.964220 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plbg\" (UniqueName: \"kubernetes.io/projected/17a21796-1b8a-45bf-9a90-73482c92a207-kube-api-access-8plbg\") pod \"calico-typha-7c47ddf557-bvd6d\" (UID: \"17a21796-1b8a-45bf-9a90-73482c92a207\") " pod="calico-system/calico-typha-7c47ddf557-bvd6d" Apr 14 01:08:04.442678 systemd[1]: Created slice kubepods-besteffort-pod7b36f875_3aa8_45e4_a83c_a6d82ec5ae7c.slice - libcontainer container kubepods-besteffort-pod7b36f875_3aa8_45e4_a83c_a6d82ec5ae7c.slice. Apr 14 01:08:04.465053 kubelet[2757]: E0414 01:08:04.462017 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:04.475049 kubelet[2757]: I0414 01:08:04.470496 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-policysync\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.475049 kubelet[2757]: I0414 01:08:04.470572 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-tigera-ca-bundle\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.475049 kubelet[2757]: I0414 01:08:04.470594 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-nodeproc\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.475049 kubelet[2757]: I0414 01:08:04.470610 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-sys-fs\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.475049 kubelet[2757]: I0414 01:08:04.470627 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-cni-net-dir\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " 
pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478178 kubelet[2757]: I0414 01:08:04.470644 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-lib-modules\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478178 kubelet[2757]: I0414 01:08:04.470663 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-node-certs\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478178 kubelet[2757]: I0414 01:08:04.470682 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-cni-log-dir\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478178 kubelet[2757]: I0414 01:08:04.472815 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-bpffs\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478178 kubelet[2757]: I0414 01:08:04.472874 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-var-run-calico\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478332 kubelet[2757]: I0414 01:08:04.472894 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-xtables-lock\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478332 kubelet[2757]: I0414 01:08:04.472971 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-cni-bin-dir\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478332 kubelet[2757]: I0414 01:08:04.472994 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-flexvol-driver-host\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478332 kubelet[2757]: I0414 01:08:04.473019 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-var-lib-calico\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478332 kubelet[2757]: I0414 01:08:04.473042 2757 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqxh\" (UniqueName: \"kubernetes.io/projected/7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c-kube-api-access-fsqxh\") pod \"calico-node-p58wt\" (UID: \"7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c\") " pod="calico-system/calico-node-p58wt" Apr 14 01:08:04.478479 kubelet[2757]: E0414 01:08:04.478082 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:04.493287 containerd[1483]: time="2026-04-14T01:08:04.491153291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47ddf557-bvd6d,Uid:17a21796-1b8a-45bf-9a90-73482c92a207,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:04.591191 kubelet[2757]: I0414 01:08:04.584837 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebecdfa2-d197-4725-b3b6-ab6cb5334f6e-varrun\") pod \"csi-node-driver-4k4bw\" (UID: \"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e\") " pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:04.591191 kubelet[2757]: I0414 01:08:04.585003 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebecdfa2-d197-4725-b3b6-ab6cb5334f6e-kubelet-dir\") pod \"csi-node-driver-4k4bw\" (UID: \"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e\") " pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:04.591191 kubelet[2757]: I0414 01:08:04.585031 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebecdfa2-d197-4725-b3b6-ab6cb5334f6e-socket-dir\") pod \"csi-node-driver-4k4bw\" (UID: \"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e\") " pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:04.591191 kubelet[2757]: I0414 01:08:04.585078 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twpb\" (UniqueName: \"kubernetes.io/projected/ebecdfa2-d197-4725-b3b6-ab6cb5334f6e-kube-api-access-5twpb\") pod \"csi-node-driver-4k4bw\" (UID: \"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e\") " pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:04.591191 kubelet[2757]: I0414 01:08:04.585282 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebecdfa2-d197-4725-b3b6-ab6cb5334f6e-registration-dir\") pod \"csi-node-driver-4k4bw\" (UID: \"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e\") " pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:04.601498 kubelet[2757]: E0414 01:08:04.601464 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.602604 kubelet[2757]: W0414 01:08:04.601737 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.602881 kubelet[2757]: E0414 01:08:04.602836 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.610032 kubelet[2757]: E0414 01:08:04.609877 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.620232 kubelet[2757]: W0414 01:08:04.616380 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.620232 kubelet[2757]: E0414 01:08:04.620064 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.630660 kubelet[2757]: E0414 01:08:04.630088 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.630660 kubelet[2757]: W0414 01:08:04.630118 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.630660 kubelet[2757]: E0414 01:08:04.630142 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.631088 kubelet[2757]: E0414 01:08:04.631040 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.631088 kubelet[2757]: W0414 01:08:04.631056 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.631088 kubelet[2757]: E0414 01:08:04.631074 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.631610 kubelet[2757]: E0414 01:08:04.631534 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.631610 kubelet[2757]: W0414 01:08:04.631550 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.631610 kubelet[2757]: E0414 01:08:04.631564 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.649439 kubelet[2757]: E0414 01:08:04.649368 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.649439 kubelet[2757]: W0414 01:08:04.649427 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.649834 kubelet[2757]: E0414 01:08:04.649566 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.669234 kubelet[2757]: E0414 01:08:04.668996 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.669234 kubelet[2757]: W0414 01:08:04.669043 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.669234 kubelet[2757]: E0414 01:08:04.669077 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.688431 kubelet[2757]: E0414 01:08:04.680743 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.688431 kubelet[2757]: W0414 01:08:04.680793 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.688431 kubelet[2757]: E0414 01:08:04.683073 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.688431 kubelet[2757]: E0414 01:08:04.685517 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.688431 kubelet[2757]: W0414 01:08:04.685539 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.688431 kubelet[2757]: E0414 01:08:04.685561 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.699463 kubelet[2757]: E0414 01:08:04.697513 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.699463 kubelet[2757]: W0414 01:08:04.698917 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.699463 kubelet[2757]: E0414 01:08:04.699137 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.713127 kubelet[2757]: E0414 01:08:04.702666 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.713622 containerd[1483]: time="2026-04-14T01:08:04.701059997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:04.713744 kubelet[2757]: W0414 01:08:04.713643 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.713744 kubelet[2757]: E0414 01:08:04.713686 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.727433 kubelet[2757]: E0414 01:08:04.716388 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.727433 kubelet[2757]: W0414 01:08:04.716413 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.727433 kubelet[2757]: E0414 01:08:04.716435 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.728687 containerd[1483]: time="2026-04-14T01:08:04.704791142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:04.728687 containerd[1483]: time="2026-04-14T01:08:04.704865534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:04.728687 containerd[1483]: time="2026-04-14T01:08:04.709870425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:04.731165 kubelet[2757]: E0414 01:08:04.731134 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.731303 kubelet[2757]: W0414 01:08:04.731285 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.731489 kubelet[2757]: E0414 01:08:04.731474 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.734837 kubelet[2757]: E0414 01:08:04.734809 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.735024 kubelet[2757]: W0414 01:08:04.735006 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.735133 kubelet[2757]: E0414 01:08:04.735119 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.735567 kubelet[2757]: E0414 01:08:04.735550 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.735663 kubelet[2757]: W0414 01:08:04.735651 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.735748 kubelet[2757]: E0414 01:08:04.735734 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.739997 kubelet[2757]: E0414 01:08:04.739921 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.740166 kubelet[2757]: W0414 01:08:04.740148 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.740244 kubelet[2757]: E0414 01:08:04.740233 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.740698 kubelet[2757]: E0414 01:08:04.740684 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.740804 kubelet[2757]: W0414 01:08:04.740793 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.740864 kubelet[2757]: E0414 01:08:04.740855 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.741158 kubelet[2757]: E0414 01:08:04.741147 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.741216 kubelet[2757]: W0414 01:08:04.741209 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.741265 kubelet[2757]: E0414 01:08:04.741257 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.741548 kubelet[2757]: E0414 01:08:04.741538 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.741608 kubelet[2757]: W0414 01:08:04.741601 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.741653 kubelet[2757]: E0414 01:08:04.741645 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.751801 kubelet[2757]: E0414 01:08:04.750602 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.751801 kubelet[2757]: W0414 01:08:04.750675 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.751801 kubelet[2757]: E0414 01:08:04.750865 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.757442 containerd[1483]: time="2026-04-14T01:08:04.757193985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p58wt,Uid:7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:04.757603 kubelet[2757]: E0414 01:08:04.757303 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.757603 kubelet[2757]: W0414 01:08:04.757326 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.757603 kubelet[2757]: E0414 01:08:04.757357 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.757773 kubelet[2757]: E0414 01:08:04.757685 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.757773 kubelet[2757]: W0414 01:08:04.757698 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.761797 kubelet[2757]: E0414 01:08:04.759832 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.761797 kubelet[2757]: E0414 01:08:04.761084 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.761797 kubelet[2757]: W0414 01:08:04.761262 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.761797 kubelet[2757]: E0414 01:08:04.761290 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.771873 kubelet[2757]: E0414 01:08:04.771544 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.772991 kubelet[2757]: W0414 01:08:04.772954 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.782328 kubelet[2757]: E0414 01:08:04.778490 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.782328 kubelet[2757]: E0414 01:08:04.781979 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.782328 kubelet[2757]: W0414 01:08:04.782004 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.782328 kubelet[2757]: E0414 01:08:04.782040 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.797456 kubelet[2757]: E0414 01:08:04.789700 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.797456 kubelet[2757]: W0414 01:08:04.796583 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.797456 kubelet[2757]: E0414 01:08:04.796773 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.800842 kubelet[2757]: E0414 01:08:04.800607 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.800842 kubelet[2757]: W0414 01:08:04.800637 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.800842 kubelet[2757]: E0414 01:08:04.800659 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.813162 kubelet[2757]: E0414 01:08:04.813004 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.813162 kubelet[2757]: W0414 01:08:04.813069 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.813162 kubelet[2757]: E0414 01:08:04.813095 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.816329 kubelet[2757]: E0414 01:08:04.814411 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.816329 kubelet[2757]: W0414 01:08:04.814432 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.816329 kubelet[2757]: E0414 01:08:04.814475 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.816329 kubelet[2757]: E0414 01:08:04.816096 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.816329 kubelet[2757]: W0414 01:08:04.816111 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.816329 kubelet[2757]: E0414 01:08:04.816127 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.821500 kubelet[2757]: E0414 01:08:04.820056 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.821500 kubelet[2757]: W0414 01:08:04.820074 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.821500 kubelet[2757]: E0414 01:08:04.820093 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.823352 kubelet[2757]: E0414 01:08:04.823332 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.823534 kubelet[2757]: W0414 01:08:04.823521 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.823731 kubelet[2757]: E0414 01:08:04.823630 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.829647 kubelet[2757]: E0414 01:08:04.829157 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.829647 kubelet[2757]: W0414 01:08:04.829184 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.829647 kubelet[2757]: E0414 01:08:04.829208 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.830785 kubelet[2757]: E0414 01:08:04.830657 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.830785 kubelet[2757]: W0414 01:08:04.830757 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.831168 kubelet[2757]: E0414 01:08:04.830830 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.836344 kubelet[2757]: E0414 01:08:04.836276 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.836344 kubelet[2757]: W0414 01:08:04.836324 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.836344 kubelet[2757]: E0414 01:08:04.836354 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.839595 systemd[1]: Started cri-containerd-b1721b813de73704839ddcb91a4813606cda8606118a0af3f1378309b65ad471.scope - libcontainer container b1721b813de73704839ddcb91a4813606cda8606118a0af3f1378309b65ad471. Apr 14 01:08:04.854982 kubelet[2757]: E0414 01:08:04.848310 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.854982 kubelet[2757]: W0414 01:08:04.848341 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.854982 kubelet[2757]: E0414 01:08:04.848485 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.867464 kubelet[2757]: E0414 01:08:04.855207 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.867464 kubelet[2757]: W0414 01:08:04.855385 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.867464 kubelet[2757]: E0414 01:08:04.855610 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:04.867464 kubelet[2757]: E0414 01:08:04.867279 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.867464 kubelet[2757]: W0414 01:08:04.867352 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.867464 kubelet[2757]: E0414 01:08:04.867398 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.878697 kubelet[2757]: E0414 01:08:04.878593 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.878697 kubelet[2757]: W0414 01:08:04.878652 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.878697 kubelet[2757]: E0414 01:08:04.878827 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.885996 kubelet[2757]: E0414 01:08:04.885886 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.885996 kubelet[2757]: W0414 01:08:04.885915 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.885996 kubelet[2757]: E0414 01:08:04.885985 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.923881 kubelet[2757]: E0414 01:08:04.923844 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:04.924199 kubelet[2757]: W0414 01:08:04.924128 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:04.924199 kubelet[2757]: E0414 01:08:04.924160 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:04.938898 containerd[1483]: time="2026-04-14T01:08:04.938297009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:04.938898 containerd[1483]: time="2026-04-14T01:08:04.938505429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:04.938898 containerd[1483]: time="2026-04-14T01:08:04.938525746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:04.938898 containerd[1483]: time="2026-04-14T01:08:04.938670649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:05.061113 systemd[1]: Started cri-containerd-0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f.scope - libcontainer container 0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f. Apr 14 01:08:05.151501 containerd[1483]: time="2026-04-14T01:08:05.151257945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47ddf557-bvd6d,Uid:17a21796-1b8a-45bf-9a90-73482c92a207,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1721b813de73704839ddcb91a4813606cda8606118a0af3f1378309b65ad471\"" Apr 14 01:08:05.197221 kubelet[2757]: E0414 01:08:05.187077 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:05.216911 containerd[1483]: time="2026-04-14T01:08:05.193914058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 14 01:08:05.287618 containerd[1483]: time="2026-04-14T01:08:05.287313390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p58wt,Uid:7b36f875-3aa8-45e4-a83c-a6d82ec5ae7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\"" Apr 14 01:08:06.221131 kubelet[2757]: E0414 01:08:06.218776 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:07.303169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301425146.mount: Deactivated successfully. 
Apr 14 01:08:08.305764 kubelet[2757]: E0414 01:08:08.304511 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:12.841844 kubelet[2757]: E0414 01:08:12.840791 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:14.191726 kubelet[2757]: E0414 01:08:14.185965 2757 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.969s" Apr 14 01:08:14.971824 containerd[1483]: time="2026-04-14T01:08:14.971561960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:14.977349 containerd[1483]: time="2026-04-14T01:08:14.977228231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 14 01:08:14.981584 containerd[1483]: time="2026-04-14T01:08:14.981505853Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:14.990864 containerd[1483]: time="2026-04-14T01:08:14.990722767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:14.997329 containerd[1483]: time="2026-04-14T01:08:14.997153602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 9.803149869s" Apr 14 01:08:14.997329 containerd[1483]: time="2026-04-14T01:08:14.997224728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 14 01:08:15.011173 containerd[1483]: time="2026-04-14T01:08:15.010423515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 14 01:08:15.084570 containerd[1483]: time="2026-04-14T01:08:15.084299777Z" level=info msg="CreateContainer within sandbox \"b1721b813de73704839ddcb91a4813606cda8606118a0af3f1378309b65ad471\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 14 01:08:15.134038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472464034.mount: Deactivated successfully. 
Apr 14 01:08:15.145000 containerd[1483]: time="2026-04-14T01:08:15.144855557Z" level=info msg="CreateContainer within sandbox \"b1721b813de73704839ddcb91a4813606cda8606118a0af3f1378309b65ad471\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa3128e296d1b32e919c30a4df037c02e16110682ff1756dfa2137f36b9b7e38\"" Apr 14 01:08:15.147060 containerd[1483]: time="2026-04-14T01:08:15.145907094Z" level=info msg="StartContainer for \"aa3128e296d1b32e919c30a4df037c02e16110682ff1756dfa2137f36b9b7e38\"" Apr 14 01:08:15.201563 kubelet[2757]: E0414 01:08:15.201438 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:15.258870 systemd[1]: Started cri-containerd-aa3128e296d1b32e919c30a4df037c02e16110682ff1756dfa2137f36b9b7e38.scope - libcontainer container aa3128e296d1b32e919c30a4df037c02e16110682ff1756dfa2137f36b9b7e38. Apr 14 01:08:15.370897 containerd[1483]: time="2026-04-14T01:08:15.370761705Z" level=info msg="StartContainer for \"aa3128e296d1b32e919c30a4df037c02e16110682ff1756dfa2137f36b9b7e38\" returns successfully" Apr 14 01:08:16.224128 kubelet[2757]: E0414 01:08:16.221297 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:16.322777 kubelet[2757]: E0414 01:08:16.322388 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.322777 kubelet[2757]: W0414 01:08:16.322423 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.322777 kubelet[2757]: E0414 01:08:16.322449 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.328541 kubelet[2757]: E0414 01:08:16.328348 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.328541 kubelet[2757]: W0414 01:08:16.328410 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.328541 kubelet[2757]: E0414 01:08:16.328648 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.341643 kubelet[2757]: E0414 01:08:16.333559 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.341643 kubelet[2757]: W0414 01:08:16.333594 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.341643 kubelet[2757]: E0414 01:08:16.333714 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.341643 kubelet[2757]: E0414 01:08:16.336830 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.341643 kubelet[2757]: W0414 01:08:16.336860 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.341643 kubelet[2757]: E0414 01:08:16.337018 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.346130 kubelet[2757]: E0414 01:08:16.346067 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.346130 kubelet[2757]: W0414 01:08:16.346111 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.346130 kubelet[2757]: E0414 01:08:16.346144 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.357648 kubelet[2757]: I0414 01:08:16.357388 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c47ddf557-bvd6d" podStartSLOduration=3.543942769 podStartE2EDuration="13.357359488s" podCreationTimestamp="2026-04-14 01:08:03 +0000 UTC" firstStartedPulling="2026-04-14 01:08:05.193478341 +0000 UTC m=+66.367592265" lastFinishedPulling="2026-04-14 01:08:15.006895067 +0000 UTC m=+76.181008984" observedRunningTime="2026-04-14 01:08:16.301416734 +0000 UTC m=+77.475530657" watchObservedRunningTime="2026-04-14 01:08:16.357359488 +0000 UTC m=+77.531473412" Apr 14 01:08:16.365823 kubelet[2757]: E0414 01:08:16.364552 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.365823 kubelet[2757]: W0414 01:08:16.364593 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.365823 kubelet[2757]: E0414 01:08:16.364722 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.367065 kubelet[2757]: E0414 01:08:16.366897 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.367065 kubelet[2757]: W0414 01:08:16.366994 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.367065 kubelet[2757]: E0414 01:08:16.367018 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.368209 kubelet[2757]: E0414 01:08:16.368189 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.368378 kubelet[2757]: W0414 01:08:16.368293 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.368378 kubelet[2757]: E0414 01:08:16.368314 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.369117 kubelet[2757]: E0414 01:08:16.369076 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.369117 kubelet[2757]: W0414 01:08:16.369104 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.369216 kubelet[2757]: E0414 01:08:16.369121 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.369452 kubelet[2757]: E0414 01:08:16.369421 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.369452 kubelet[2757]: W0414 01:08:16.369445 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.369542 kubelet[2757]: E0414 01:08:16.369458 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.370981 kubelet[2757]: E0414 01:08:16.370914 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.373852 kubelet[2757]: W0414 01:08:16.372169 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.373852 kubelet[2757]: E0414 01:08:16.372208 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.379992 kubelet[2757]: E0414 01:08:16.379061 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.379992 kubelet[2757]: W0414 01:08:16.379097 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.379992 kubelet[2757]: E0414 01:08:16.379204 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.385140 kubelet[2757]: E0414 01:08:16.385097 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.385140 kubelet[2757]: W0414 01:08:16.385132 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.386192 kubelet[2757]: E0414 01:08:16.385154 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.386192 kubelet[2757]: E0414 01:08:16.385411 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.386192 kubelet[2757]: W0414 01:08:16.385423 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.386192 kubelet[2757]: E0414 01:08:16.385437 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.386192 kubelet[2757]: E0414 01:08:16.385623 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.386192 kubelet[2757]: W0414 01:08:16.385633 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.386192 kubelet[2757]: E0414 01:08:16.385644 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.387364 kubelet[2757]: E0414 01:08:16.387182 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.387364 kubelet[2757]: W0414 01:08:16.387202 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.387364 kubelet[2757]: E0414 01:08:16.387215 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.387866 kubelet[2757]: E0414 01:08:16.387704 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.387866 kubelet[2757]: W0414 01:08:16.387716 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.387866 kubelet[2757]: E0414 01:08:16.387774 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.390003 kubelet[2757]: E0414 01:08:16.389855 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.390003 kubelet[2757]: W0414 01:08:16.389872 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.390003 kubelet[2757]: E0414 01:08:16.389885 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.395998 kubelet[2757]: E0414 01:08:16.394573 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.395998 kubelet[2757]: W0414 01:08:16.394600 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.395998 kubelet[2757]: E0414 01:08:16.394623 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.396764 kubelet[2757]: E0414 01:08:16.396726 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.396892 kubelet[2757]: W0414 01:08:16.396881 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.397070 kubelet[2757]: E0414 01:08:16.397058 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.397549 kubelet[2757]: E0414 01:08:16.397524 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.397647 kubelet[2757]: W0414 01:08:16.397603 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.397738 kubelet[2757]: E0414 01:08:16.397693 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.406687 kubelet[2757]: E0414 01:08:16.406074 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.407410 kubelet[2757]: W0414 01:08:16.407386 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.410724 kubelet[2757]: E0414 01:08:16.410696 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.411683 kubelet[2757]: E0414 01:08:16.411610 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.411883 kubelet[2757]: W0414 01:08:16.411629 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.411883 kubelet[2757]: E0414 01:08:16.411803 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.412602 kubelet[2757]: E0414 01:08:16.412432 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.412602 kubelet[2757]: W0414 01:08:16.412445 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.412602 kubelet[2757]: E0414 01:08:16.412487 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.413251 kubelet[2757]: E0414 01:08:16.413120 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.413251 kubelet[2757]: W0414 01:08:16.413133 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.413251 kubelet[2757]: E0414 01:08:16.413145 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.413768 kubelet[2757]: E0414 01:08:16.413647 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.413768 kubelet[2757]: W0414 01:08:16.413661 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.413768 kubelet[2757]: E0414 01:08:16.413673 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.414885 kubelet[2757]: E0414 01:08:16.414694 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.414885 kubelet[2757]: W0414 01:08:16.414707 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.414885 kubelet[2757]: E0414 01:08:16.414719 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.415171 kubelet[2757]: E0414 01:08:16.415162 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.415267 kubelet[2757]: W0414 01:08:16.415221 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.415267 kubelet[2757]: E0414 01:08:16.415234 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.420179 kubelet[2757]: E0414 01:08:16.419850 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.421047 kubelet[2757]: W0414 01:08:16.420986 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.421257 kubelet[2757]: E0414 01:08:16.421130 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.421800 kubelet[2757]: E0414 01:08:16.421712 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.421800 kubelet[2757]: W0414 01:08:16.421742 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.421800 kubelet[2757]: E0414 01:08:16.421758 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.422262 kubelet[2757]: E0414 01:08:16.422079 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.422262 kubelet[2757]: W0414 01:08:16.422089 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.422262 kubelet[2757]: E0414 01:08:16.422100 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:16.422592 kubelet[2757]: E0414 01:08:16.422541 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.422592 kubelet[2757]: W0414 01:08:16.422565 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.422592 kubelet[2757]: E0414 01:08:16.422577 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:16.423284 kubelet[2757]: E0414 01:08:16.423246 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:16.423284 kubelet[2757]: W0414 01:08:16.423275 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:16.423284 kubelet[2757]: E0414 01:08:16.423287 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.108382 containerd[1483]: time="2026-04-14T01:08:17.108150997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:17.109761 containerd[1483]: time="2026-04-14T01:08:17.108904706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 14 01:08:17.111076 containerd[1483]: time="2026-04-14T01:08:17.110341226Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:17.114308 containerd[1483]: time="2026-04-14T01:08:17.113599503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:17.114864 containerd[1483]: time="2026-04-14T01:08:17.114243918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.103757382s" Apr 14 01:08:17.114864 containerd[1483]: time="2026-04-14T01:08:17.114711865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 14 01:08:17.152576 containerd[1483]: time="2026-04-14T01:08:17.151531250Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 14 01:08:17.190744 containerd[1483]: time="2026-04-14T01:08:17.190657246Z" level=info msg="CreateContainer within sandbox 
\"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1\"" Apr 14 01:08:17.191735 containerd[1483]: time="2026-04-14T01:08:17.191349567Z" level=info msg="StartContainer for \"cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1\"" Apr 14 01:08:17.215872 kubelet[2757]: E0414 01:08:17.215618 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:17.256311 kubelet[2757]: E0414 01:08:17.256126 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:17.298548 systemd[1]: Started cri-containerd-cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1.scope - libcontainer container cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1. Apr 14 01:08:17.314314 kubelet[2757]: E0414 01:08:17.314224 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.314314 kubelet[2757]: W0414 01:08:17.314286 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.314669 kubelet[2757]: E0414 01:08:17.314418 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.316220 kubelet[2757]: E0414 01:08:17.316168 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.316348 kubelet[2757]: W0414 01:08:17.316258 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.316348 kubelet[2757]: E0414 01:08:17.316284 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.316882 kubelet[2757]: E0414 01:08:17.316833 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.316882 kubelet[2757]: W0414 01:08:17.316864 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.316882 kubelet[2757]: E0414 01:08:17.316879 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:17.317390 kubelet[2757]: E0414 01:08:17.317340 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.317447 kubelet[2757]: W0414 01:08:17.317419 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.317447 kubelet[2757]: E0414 01:08:17.317438 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.318257 kubelet[2757]: E0414 01:08:17.318210 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.318257 kubelet[2757]: W0414 01:08:17.318240 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.318257 kubelet[2757]: E0414 01:08:17.318258 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.318740 kubelet[2757]: E0414 01:08:17.318697 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.319217 kubelet[2757]: W0414 01:08:17.319169 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.319291 kubelet[2757]: E0414 01:08:17.319205 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.319660 kubelet[2757]: E0414 01:08:17.319615 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.328382 kubelet[2757]: W0414 01:08:17.324760 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.328382 kubelet[2757]: E0414 01:08:17.328303 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.338257 kubelet[2757]: E0414 01:08:17.337130 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.338257 kubelet[2757]: W0414 01:08:17.337227 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.338257 kubelet[2757]: E0414 01:08:17.337351 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:17.343346 kubelet[2757]: E0414 01:08:17.343148 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.343346 kubelet[2757]: W0414 01:08:17.343207 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.343346 kubelet[2757]: E0414 01:08:17.343321 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.344129 kubelet[2757]: E0414 01:08:17.343704 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.344129 kubelet[2757]: W0414 01:08:17.343716 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.344129 kubelet[2757]: E0414 01:08:17.343730 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.344491 kubelet[2757]: E0414 01:08:17.344454 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.344491 kubelet[2757]: W0414 01:08:17.344479 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.344581 kubelet[2757]: E0414 01:08:17.344492 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.346174 kubelet[2757]: E0414 01:08:17.346138 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.346174 kubelet[2757]: W0414 01:08:17.346163 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.346174 kubelet[2757]: E0414 01:08:17.346177 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.347323 kubelet[2757]: E0414 01:08:17.347281 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.347323 kubelet[2757]: W0414 01:08:17.347310 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.347323 kubelet[2757]: E0414 01:08:17.347323 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:08:17.348261 kubelet[2757]: E0414 01:08:17.348215 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.348261 kubelet[2757]: W0414 01:08:17.348242 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.348261 kubelet[2757]: E0414 01:08:17.348255 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.349014 kubelet[2757]: E0414 01:08:17.348896 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.349014 kubelet[2757]: W0414 01:08:17.348920 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.349014 kubelet[2757]: E0414 01:08:17.348962 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.415275 kubelet[2757]: E0414 01:08:17.415047 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.415275 kubelet[2757]: W0414 01:08:17.415170 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.415458 kubelet[2757]: E0414 01:08:17.415305 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.417499 kubelet[2757]: E0414 01:08:17.417444 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:08:17.417499 kubelet[2757]: W0414 01:08:17.417484 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:08:17.417651 kubelet[2757]: E0414 01:08:17.417508 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:08:17.421118 containerd[1483]: time="2026-04-14T01:08:17.420542864Z" level=info msg="StartContainer for \"cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1\" returns successfully" Apr 14 01:08:17.427081 systemd[1]: cri-containerd-cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1.scope: Deactivated successfully. Apr 14 01:08:17.498536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1-rootfs.mount: Deactivated successfully. 
Apr 14 01:08:17.528403 containerd[1483]: time="2026-04-14T01:08:17.528198394Z" level=info msg="shim disconnected" id=cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1 namespace=k8s.io Apr 14 01:08:17.528403 containerd[1483]: time="2026-04-14T01:08:17.528285743Z" level=warning msg="cleaning up after shim disconnected" id=cdf0e3f044f8fd4b414667b82efb7e01355755c810a8a8c6363d85270bc926d1 namespace=k8s.io Apr 14 01:08:17.528403 containerd[1483]: time="2026-04-14T01:08:17.528298902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:08:18.239754 kubelet[2757]: E0414 01:08:18.235153 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:18.316298 containerd[1483]: time="2026-04-14T01:08:18.310887452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 14 01:08:19.215973 kubelet[2757]: E0414 01:08:19.215736 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:20.233978 kubelet[2757]: E0414 01:08:20.233395 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:21.221838 kubelet[2757]: E0414 01:08:21.215702 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:23.218400 kubelet[2757]: E0414 01:08:23.218162 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:25.221138 kubelet[2757]: E0414 01:08:25.216403 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:26.216315 kubelet[2757]: E0414 01:08:26.216196 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:27.216492 kubelet[2757]: E0414 01:08:27.215881 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:29.224903 kubelet[2757]: E0414 01:08:29.224525 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:29.633649 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:49672.service - OpenSSH per-connection server daemon (10.0.0.1:49672). Apr 14 01:08:29.736477 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 49672 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:29.749448 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:29.765287 systemd-logind[1472]: New session 8 of user core. Apr 14 01:08:29.778875 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 14 01:08:30.036708 sshd[3529]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:30.042872 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:49672.service: Deactivated successfully. Apr 14 01:08:30.068233 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 01:08:30.076268 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Apr 14 01:08:30.088472 systemd-logind[1472]: Removed session 8. Apr 14 01:08:30.219391 kubelet[2757]: E0414 01:08:30.218835 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:31.218962 kubelet[2757]: E0414 01:08:31.215526 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:33.217713 kubelet[2757]: E0414 01:08:33.217424 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:34.993665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763925874.mount: Deactivated successfully. Apr 14 01:08:35.118760 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:49682.service - OpenSSH per-connection server daemon (10.0.0.1:49682). 
Apr 14 01:08:35.155519 containerd[1483]: time="2026-04-14T01:08:35.155447800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 14 01:08:35.155519 containerd[1483]: time="2026-04-14T01:08:35.155504872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:35.158331 containerd[1483]: time="2026-04-14T01:08:35.158231927Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:35.182138 containerd[1483]: time="2026-04-14T01:08:35.182038300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:35.185156 containerd[1483]: time="2026-04-14T01:08:35.185093614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 16.874117883s" Apr 14 01:08:35.185156 containerd[1483]: time="2026-04-14T01:08:35.185149274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 14 01:08:35.219978 kubelet[2757]: E0414 01:08:35.217235 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:35.258711 containerd[1483]: time="2026-04-14T01:08:35.256138149Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 14 01:08:35.310117 containerd[1483]: time="2026-04-14T01:08:35.309916841Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799\"" Apr 14 01:08:35.312624 containerd[1483]: time="2026-04-14T01:08:35.312466530Z" level=info msg="StartContainer for \"c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799\"" Apr 14 01:08:35.315204 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 49682 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:35.327350 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:35.369008 systemd-logind[1472]: New session 9 of user core. Apr 14 01:08:35.379792 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 01:08:35.673578 systemd[1]: Started cri-containerd-c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799.scope - libcontainer container c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799. 
Apr 14 01:08:35.881374 containerd[1483]: time="2026-04-14T01:08:35.880472496Z" level=info msg="StartContainer for \"c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799\" returns successfully" Apr 14 01:08:35.964487 sshd[3548]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:35.990827 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Apr 14 01:08:36.002466 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:49682.service: Deactivated successfully. Apr 14 01:08:36.013525 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 01:08:36.034026 systemd-logind[1472]: Removed session 9. Apr 14 01:08:36.159116 systemd[1]: cri-containerd-c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799.scope: Deactivated successfully. Apr 14 01:08:36.237111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799-rootfs.mount: Deactivated successfully. Apr 14 01:08:36.441738 containerd[1483]: time="2026-04-14T01:08:36.441363492Z" level=info msg="shim disconnected" id=c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799 namespace=k8s.io Apr 14 01:08:36.441738 containerd[1483]: time="2026-04-14T01:08:36.441427971Z" level=warning msg="cleaning up after shim disconnected" id=c9de6e2d1ce685924ac1a4a8c22382510b18f6338f8d80bde16ef6abf1bc7799 namespace=k8s.io Apr 14 01:08:36.441738 containerd[1483]: time="2026-04-14T01:08:36.441438131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:08:37.222789 kubelet[2757]: E0414 01:08:37.222148 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:37.518905 containerd[1483]: time="2026-04-14T01:08:37.518483041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 14 01:08:39.225070 kubelet[2757]: E0414 01:08:39.224026 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:41.037034 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:37668.service - OpenSSH per-connection server daemon (10.0.0.1:37668). Apr 14 01:08:41.173443 sshd[3640]: Accepted publickey for core from 10.0.0.1 port 37668 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:41.176763 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:41.202097 systemd-logind[1472]: New session 10 of user core. Apr 14 01:08:41.221117 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 14 01:08:41.223620 kubelet[2757]: E0414 01:08:41.216824 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:41.830075 sshd[3640]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:41.859978 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:37668.service: Deactivated successfully. Apr 14 01:08:41.890069 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 01:08:41.895304 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Apr 14 01:08:41.906386 systemd-logind[1472]: Removed session 10. Apr 14 01:08:43.221675 kubelet[2757]: E0414 01:08:43.217432 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:45.220522 kubelet[2757]: E0414 01:08:45.219636 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:46.915695 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:52926.service - OpenSSH per-connection server daemon (10.0.0.1:52926). Apr 14 01:08:47.201316 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 52926 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:47.208462 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:47.233584 kubelet[2757]: E0414 01:08:47.228712 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:47.232750 systemd-logind[1472]: New session 11 of user core. Apr 14 01:08:47.248912 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 01:08:47.776893 sshd[3657]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:47.786709 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:52926.service: Deactivated successfully. Apr 14 01:08:47.789458 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Apr 14 01:08:47.790554 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 01:08:47.795321 systemd-logind[1472]: Removed session 11. 
Apr 14 01:08:48.102954 containerd[1483]: time="2026-04-14T01:08:48.102859711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:48.105325 containerd[1483]: time="2026-04-14T01:08:48.105198345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 14 01:08:48.108378 containerd[1483]: time="2026-04-14T01:08:48.108278547Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:48.142324 containerd[1483]: time="2026-04-14T01:08:48.142176923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:48.143320 containerd[1483]: time="2026-04-14T01:08:48.143257594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 10.624698878s" Apr 14 01:08:48.143320 containerd[1483]: time="2026-04-14T01:08:48.143314716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 14 01:08:48.161287 containerd[1483]: time="2026-04-14T01:08:48.161088479Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 01:08:48.220489 containerd[1483]: time="2026-04-14T01:08:48.220387918Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9\"" Apr 14 01:08:48.221911 containerd[1483]: time="2026-04-14T01:08:48.221079698Z" level=info msg="StartContainer for \"463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9\"" Apr 14 01:08:48.514730 systemd[1]: run-containerd-runc-k8s.io-463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9-runc.ocWRZE.mount: Deactivated successfully. Apr 14 01:08:48.539812 systemd[1]: Started cri-containerd-463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9.scope - libcontainer container 463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9. Apr 14 01:08:48.687856 containerd[1483]: time="2026-04-14T01:08:48.687753347Z" level=info msg="StartContainer for \"463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9\" returns successfully" Apr 14 01:08:49.216267 kubelet[2757]: E0414 01:08:49.215880 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:50.449108 systemd[1]: cri-containerd-463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9.scope: Deactivated successfully. 
Apr 14 01:08:50.449397 systemd[1]: cri-containerd-463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9.scope: Consumed 1.285s CPU time. Apr 14 01:08:50.548890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9-rootfs.mount: Deactivated successfully. Apr 14 01:08:50.589914 kubelet[2757]: I0414 01:08:50.589764 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 01:08:50.594516 containerd[1483]: time="2026-04-14T01:08:50.594314351Z" level=info msg="shim disconnected" id=463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9 namespace=k8s.io Apr 14 01:08:50.594516 containerd[1483]: time="2026-04-14T01:08:50.594467702Z" level=warning msg="cleaning up after shim disconnected" id=463ae60f8197b129719e053552f3a9d3ff20b913650b9a3a25814d7dedbd1ab9 namespace=k8s.io Apr 14 01:08:50.595071 containerd[1483]: time="2026-04-14T01:08:50.594531815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:08:50.843920 systemd[1]: Created slice kubepods-burstable-pod6c8f5369_6e4c_4879_910d_f06910f3f96b.slice - libcontainer container kubepods-burstable-pod6c8f5369_6e4c_4879_910d_f06910f3f96b.slice. Apr 14 01:08:50.865609 kubelet[2757]: I0414 01:08:50.865149 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b78b6d-5df6-4ebd-9b0e-7c5c5e956100-tigera-ca-bundle\") pod \"calico-kube-controllers-7c8dbf654f-qm855\" (UID: \"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100\") " pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" Apr 14 01:08:50.865609 kubelet[2757]: I0414 01:08:50.865198 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c8f5369-6e4c-4879-910d-f06910f3f96b-config-volume\") pod \"coredns-674b8bbfcf-qtnrp\" (UID: \"6c8f5369-6e4c-4879-910d-f06910f3f96b\") " pod="kube-system/coredns-674b8bbfcf-qtnrp" Apr 14 01:08:50.865609 kubelet[2757]: I0414 01:08:50.865228 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8rg\" (UniqueName: \"kubernetes.io/projected/6c8f5369-6e4c-4879-910d-f06910f3f96b-kube-api-access-vx8rg\") pod \"coredns-674b8bbfcf-qtnrp\" (UID: \"6c8f5369-6e4c-4879-910d-f06910f3f96b\") " pod="kube-system/coredns-674b8bbfcf-qtnrp" Apr 14 01:08:50.865609 kubelet[2757]: I0414 01:08:50.865256 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzxfj\" (UniqueName: \"kubernetes.io/projected/10b78b6d-5df6-4ebd-9b0e-7c5c5e956100-kube-api-access-xzxfj\") pod \"calico-kube-controllers-7c8dbf654f-qm855\" (UID: \"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100\") " pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" Apr 14 01:08:50.869834 containerd[1483]: time="2026-04-14T01:08:50.869715028Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 14 01:08:50.884303 systemd[1]: Created slice kubepods-besteffort-pod10b78b6d_5df6_4ebd_9b0e_7c5c5e956100.slice - libcontainer container kubepods-besteffort-pod10b78b6d_5df6_4ebd_9b0e_7c5c5e956100.slice. 
Apr 14 01:08:50.932678 systemd[1]: Created slice kubepods-besteffort-pod597ea105_c0f3_48ad_ac8b_c14e65afc502.slice - libcontainer container kubepods-besteffort-pod597ea105_c0f3_48ad_ac8b_c14e65afc502.slice. Apr 14 01:08:51.015154 kubelet[2757]: I0414 01:08:51.002568 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rzn\" (UniqueName: \"kubernetes.io/projected/03832b61-493a-446a-b13d-8da3b79bf6be-kube-api-access-26rzn\") pod \"coredns-674b8bbfcf-bdzqh\" (UID: \"03832b61-493a-446a-b13d-8da3b79bf6be\") " pod="kube-system/coredns-674b8bbfcf-bdzqh" Apr 14 01:08:51.015154 kubelet[2757]: I0414 01:08:51.010053 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzcv\" (UniqueName: \"kubernetes.io/projected/597ea105-c0f3-48ad-ac8b-c14e65afc502-kube-api-access-nbzcv\") pod \"calico-apiserver-6b98884ffd-5drbf\" (UID: \"597ea105-c0f3-48ad-ac8b-c14e65afc502\") " pod="calico-system/calico-apiserver-6b98884ffd-5drbf" Apr 14 01:08:51.015154 kubelet[2757]: I0414 01:08:51.010087 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-nf2k5\" (UID: \"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7\") " pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:51.015154 kubelet[2757]: I0414 01:08:51.010116 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7-config\") pod \"goldmane-5b85766d88-nf2k5\" (UID: \"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7\") " pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:51.015154 kubelet[2757]: I0414 01:08:51.010144 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wmd\" (UniqueName: \"kubernetes.io/projected/7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7-kube-api-access-j2wmd\") pod \"goldmane-5b85766d88-nf2k5\" (UID: \"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7\") " pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:51.023829 kubelet[2757]: I0414 01:08:51.010172 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/597ea105-c0f3-48ad-ac8b-c14e65afc502-calico-apiserver-certs\") pod \"calico-apiserver-6b98884ffd-5drbf\" (UID: \"597ea105-c0f3-48ad-ac8b-c14e65afc502\") " pod="calico-system/calico-apiserver-6b98884ffd-5drbf" Apr 14 01:08:51.025062 kubelet[2757]: I0414 01:08:51.011390 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7-goldmane-key-pair\") pod \"goldmane-5b85766d88-nf2k5\" (UID: \"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7\") " pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:51.029976 kubelet[2757]: I0414 01:08:51.025170 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16f4d282-7525-45fa-9798-7274ea91b7f6-calico-apiserver-certs\") pod \"calico-apiserver-6b98884ffd-t64fc\" (UID: \"16f4d282-7525-45fa-9798-7274ea91b7f6\") " pod="calico-system/calico-apiserver-6b98884ffd-t64fc" Apr 14 
01:08:51.030226 kubelet[2757]: I0414 01:08:51.030102 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94zl\" (UniqueName: \"kubernetes.io/projected/16f4d282-7525-45fa-9798-7274ea91b7f6-kube-api-access-j94zl\") pod \"calico-apiserver-6b98884ffd-t64fc\" (UID: \"16f4d282-7525-45fa-9798-7274ea91b7f6\") " pod="calico-system/calico-apiserver-6b98884ffd-t64fc" Apr 14 01:08:51.032240 kubelet[2757]: I0414 01:08:51.032119 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03832b61-493a-446a-b13d-8da3b79bf6be-config-volume\") pod \"coredns-674b8bbfcf-bdzqh\" (UID: \"03832b61-493a-446a-b13d-8da3b79bf6be\") " pod="kube-system/coredns-674b8bbfcf-bdzqh" Apr 14 01:08:51.056679 systemd[1]: Created slice kubepods-burstable-pod03832b61_493a_446a_b13d_8da3b79bf6be.slice - libcontainer container kubepods-burstable-pod03832b61_493a_446a_b13d_8da3b79bf6be.slice. Apr 14 01:08:51.098892 containerd[1483]: time="2026-04-14T01:08:51.091123128Z" level=info msg="CreateContainer within sandbox \"0338a02ba93a40a54de340020435bd3833786a3cb730bc2838ac45a0b95a797f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a\"" Apr 14 01:08:51.098892 containerd[1483]: time="2026-04-14T01:08:51.093874566Z" level=info msg="StartContainer for \"dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a\"" Apr 14 01:08:51.144721 kubelet[2757]: I0414 01:08:51.142897 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-nginx-config\") pod \"whisker-597645cf5b-qdbbf\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:51.144721 kubelet[2757]: I0414 01:08:51.143092 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2389b192-c3d5-480f-b72a-94e73783396e-whisker-backend-key-pair\") pod \"whisker-597645cf5b-qdbbf\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:51.144721 kubelet[2757]: I0414 01:08:51.143136 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-whisker-ca-bundle\") pod \"whisker-597645cf5b-qdbbf\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:51.144721 kubelet[2757]: I0414 01:08:51.143159 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvbn\" (UniqueName: \"kubernetes.io/projected/2389b192-c3d5-480f-b72a-94e73783396e-kube-api-access-lkvbn\") pod \"whisker-597645cf5b-qdbbf\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:51.193375 systemd[1]: Created slice kubepods-besteffort-pod16f4d282_7525_45fa_9798_7274ea91b7f6.slice - libcontainer container kubepods-besteffort-pod16f4d282_7525_45fa_9798_7274ea91b7f6.slice. 
Apr 14 01:08:51.210653 containerd[1483]: time="2026-04-14T01:08:51.210532487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8dbf654f-qm855,Uid:10b78b6d-5df6-4ebd-9b0e-7c5c5e956100,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:51.232900 kubelet[2757]: E0414 01:08:51.230085 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:51.265080 containerd[1483]: time="2026-04-14T01:08:51.263862088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qtnrp,Uid:6c8f5369-6e4c-4879-910d-f06910f3f96b,Namespace:kube-system,Attempt:0,}" Apr 14 01:08:51.402300 kubelet[2757]: E0414 01:08:51.399127 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:08:51.406739 containerd[1483]: time="2026-04-14T01:08:51.406308577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-5drbf,Uid:597ea105-c0f3-48ad-ac8b-c14e65afc502,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:51.407268 containerd[1483]: time="2026-04-14T01:08:51.407172383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdzqh,Uid:03832b61-493a-446a-b13d-8da3b79bf6be,Namespace:kube-system,Attempt:0,}" Apr 14 01:08:51.476567 systemd[1]: Started cri-containerd-dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a.scope - libcontainer container dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a. Apr 14 01:08:51.544568 systemd[1]: Created slice kubepods-besteffort-pod7b9a1088_c5ea_4bab_bffb_1a8c0f0d12a7.slice - libcontainer container kubepods-besteffort-pod7b9a1088_c5ea_4bab_bffb_1a8c0f0d12a7.slice. Apr 14 01:08:51.649402 containerd[1483]: time="2026-04-14T01:08:51.642409021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-t64fc,Uid:16f4d282-7525-45fa-9798-7274ea91b7f6,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:51.759630 containerd[1483]: time="2026-04-14T01:08:51.755594822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nf2k5,Uid:7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:51.823257 systemd[1]: Created slice kubepods-besteffort-pod2389b192_c3d5_480f_b72a_94e73783396e.slice - libcontainer container kubepods-besteffort-pod2389b192_c3d5_480f_b72a_94e73783396e.slice. Apr 14 01:08:51.904028 containerd[1483]: time="2026-04-14T01:08:51.903909012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-597645cf5b-qdbbf,Uid:2389b192-c3d5-480f-b72a-94e73783396e,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:51.905433 systemd[1]: Created slice kubepods-besteffort-podebecdfa2_d197_4725_b3b6_ab6cb5334f6e.slice - libcontainer container kubepods-besteffort-podebecdfa2_d197_4725_b3b6_ab6cb5334f6e.slice. 
Apr 14 01:08:51.907347 containerd[1483]: time="2026-04-14T01:08:51.906739576Z" level=info msg="StartContainer for \"dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a\" returns successfully" Apr 14 01:08:51.946822 containerd[1483]: time="2026-04-14T01:08:51.946640118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k4bw,Uid:ebecdfa2-d197-4725-b3b6-ab6cb5334f6e,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:52.821612 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:52942.service - OpenSSH per-connection server daemon (10.0.0.1:52942). Apr 14 01:08:52.827811 containerd[1483]: time="2026-04-14T01:08:52.827361042Z" level=error msg="Failed to destroy network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.839312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf-shm.mount: Deactivated successfully. Apr 14 01:08:52.846010 containerd[1483]: time="2026-04-14T01:08:52.844078809Z" level=error msg="encountered an error cleaning up failed sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.846010 containerd[1483]: time="2026-04-14T01:08:52.844195450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-5drbf,Uid:597ea105-c0f3-48ad-ac8b-c14e65afc502,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.857248 containerd[1483]: time="2026-04-14T01:08:52.857141672Z" level=error msg="Failed to destroy network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.866880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2-shm.mount: Deactivated successfully. 
Apr 14 01:08:52.886028 kubelet[2757]: E0414 01:08:52.885877 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.886997 kubelet[2757]: E0414 01:08:52.886967 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b98884ffd-5drbf" Apr 14 01:08:52.887158 kubelet[2757]: E0414 01:08:52.887139 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b98884ffd-5drbf" Apr 14 01:08:52.887328 kubelet[2757]: E0414 01:08:52.887298 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b98884ffd-5drbf_calico-system(597ea105-c0f3-48ad-ac8b-c14e65afc502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b98884ffd-5drbf_calico-system(597ea105-c0f3-48ad-ac8b-c14e65afc502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b98884ffd-5drbf" podUID="597ea105-c0f3-48ad-ac8b-c14e65afc502" Apr 14 01:08:52.888062 containerd[1483]: time="2026-04-14T01:08:52.886969897Z" level=error msg="encountered an error cleaning up failed sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.888237 containerd[1483]: time="2026-04-14T01:08:52.888206739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qtnrp,Uid:6c8f5369-6e4c-4879-910d-f06910f3f96b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.888694 kubelet[2757]: E0414 01:08:52.888673 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.888801 kubelet[2757]: E0414 01:08:52.888785 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qtnrp" Apr 14 01:08:52.888861 kubelet[2757]: E0414 01:08:52.888849 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qtnrp" Apr 14 01:08:52.895139 kubelet[2757]: E0414 01:08:52.894811 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qtnrp_kube-system(6c8f5369-6e4c-4879-910d-f06910f3f96b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qtnrp_kube-system(6c8f5369-6e4c-4879-910d-f06910f3f96b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qtnrp" podUID="6c8f5369-6e4c-4879-910d-f06910f3f96b" Apr 14 01:08:52.945782 containerd[1483]: time="2026-04-14T01:08:52.945698875Z" level=error msg="Failed to destroy network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.950240 containerd[1483]: time="2026-04-14T01:08:52.950170119Z" level=error msg="encountered an error cleaning up failed sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.953089 containerd[1483]: time="2026-04-14T01:08:52.953014212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdzqh,Uid:03832b61-493a-446a-b13d-8da3b79bf6be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.953682 kubelet[2757]: E0414 01:08:52.953641 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:52.953831 kubelet[2757]: E0414 01:08:52.953815 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bdzqh" Apr 14 01:08:52.953914 kubelet[2757]: E0414 01:08:52.953898 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bdzqh" Apr 14 01:08:52.954091 kubelet[2757]: E0414 01:08:52.954063 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bdzqh_kube-system(03832b61-493a-446a-b13d-8da3b79bf6be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bdzqh_kube-system(03832b61-493a-446a-b13d-8da3b79bf6be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bdzqh" podUID="03832b61-493a-446a-b13d-8da3b79bf6be" Apr 14 01:08:52.957812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86-shm.mount: Deactivated successfully. Apr 14 01:08:53.028022 containerd[1483]: time="2026-04-14T01:08:53.027980627Z" level=error msg="Failed to destroy network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.040392 kubelet[2757]: I0414 01:08:53.030337 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:08:53.047188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b-shm.mount: Deactivated successfully. 
Apr 14 01:08:53.047717 containerd[1483]: time="2026-04-14T01:08:53.047656316Z" level=error msg="encountered an error cleaning up failed sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.048646 containerd[1483]: time="2026-04-14T01:08:53.048132380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nf2k5,Uid:7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.048762 kubelet[2757]: E0414 01:08:53.048366 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.048762 kubelet[2757]: E0414 01:08:53.048434 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:53.048762 kubelet[2757]: E0414 01:08:53.048505 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nf2k5" Apr 14 01:08:53.048881 kubelet[2757]: E0414 01:08:53.048567 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-nf2k5_calico-system(7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-nf2k5_calico-system(7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-nf2k5" podUID="7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7" Apr 14 01:08:53.061146 kubelet[2757]: I0414 01:08:53.060636 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:08:53.063414 containerd[1483]: time="2026-04-14T01:08:53.060903216Z" level=error msg="Failed to destroy 
network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.075098 containerd[1483]: time="2026-04-14T01:08:53.074729811Z" level=info msg="StopPodSandbox for \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\"" Apr 14 01:08:53.080708 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 52942 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080075996Z" level=error msg="Failed to destroy network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080423769Z" level=info msg="StopPodSandbox for \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\"" Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080581783Z" level=error msg="encountered an error cleaning up failed sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080625948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8dbf654f-qm855,Uid:10b78b6d-5df6-4ebd-9b0e-7c5c5e956100,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080920293Z" level=error msg="encountered an error cleaning up failed sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.081109 containerd[1483]: time="2026-04-14T01:08:53.080989574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k4bw,Uid:ebecdfa2-d197-4725-b3b6-ab6cb5334f6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.083161 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:53.084189 containerd[1483]: time="2026-04-14T01:08:53.083214872Z" level=info msg="Ensure that sandbox 17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf in task-service has been cleanup successfully" Apr 14 01:08:53.084189 
containerd[1483]: time="2026-04-14T01:08:53.084024863Z" level=info msg="Ensure that sandbox 130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86 in task-service has been cleanup successfully" Apr 14 01:08:53.084313 kubelet[2757]: E0414 01:08:53.084042 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.084626 kubelet[2757]: E0414 01:08:53.084574 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:53.088023 kubelet[2757]: E0414 01:08:53.084494 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.088023 kubelet[2757]: E0414 01:08:53.088052 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" Apr 14 01:08:53.088023 kubelet[2757]: E0414 01:08:53.088083 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" Apr 14 01:08:53.088280 kubelet[2757]: E0414 01:08:53.088150 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c8dbf654f-qm855_calico-system(10b78b6d-5df6-4ebd-9b0e-7c5c5e956100)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c8dbf654f-qm855_calico-system(10b78b6d-5df6-4ebd-9b0e-7c5c5e956100)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" podUID="10b78b6d-5df6-4ebd-9b0e-7c5c5e956100" Apr 14 01:08:53.089195 kubelet[2757]: E0414 01:08:53.088574 2757 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k4bw" Apr 14 01:08:53.090176 kubelet[2757]: E0414 01:08:53.090149 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4k4bw_calico-system(ebecdfa2-d197-4725-b3b6-ab6cb5334f6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4k4bw_calico-system(ebecdfa2-d197-4725-b3b6-ab6cb5334f6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4k4bw" podUID="ebecdfa2-d197-4725-b3b6-ab6cb5334f6e" Apr 14 01:08:53.114159 systemd-logind[1472]: New session 12 of user core. Apr 14 01:08:53.128303 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 01:08:53.143012 containerd[1483]: time="2026-04-14T01:08:53.142347802Z" level=error msg="Failed to destroy network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.147472 containerd[1483]: time="2026-04-14T01:08:53.147344827Z" level=error msg="encountered an error cleaning up failed sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.147472 containerd[1483]: time="2026-04-14T01:08:53.147438523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-t64fc,Uid:16f4d282-7525-45fa-9798-7274ea91b7f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.152274 kubelet[2757]: E0414 01:08:53.152217 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.152274 kubelet[2757]: E0414 01:08:53.152281 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b98884ffd-t64fc" Apr 14 01:08:53.152410 kubelet[2757]: E0414 01:08:53.152299 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b98884ffd-t64fc" Apr 14 01:08:53.152410 kubelet[2757]: E0414 01:08:53.152346 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b98884ffd-t64fc_calico-system(16f4d282-7525-45fa-9798-7274ea91b7f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b98884ffd-t64fc_calico-system(16f4d282-7525-45fa-9798-7274ea91b7f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b98884ffd-t64fc" podUID="16f4d282-7525-45fa-9798-7274ea91b7f6" Apr 14 01:08:53.190123 kubelet[2757]: I0414 01:08:53.188887 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:08:53.190225 containerd[1483]: time="2026-04-14T01:08:53.189750807Z" level=info msg="StopPodSandbox for \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\"" Apr 14 01:08:53.190316 containerd[1483]: time="2026-04-14T01:08:53.190269323Z" level=info msg="Ensure that sandbox c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2 in task-service has been cleanup successfully" Apr 14 01:08:53.555063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e-shm.mount: Deactivated successfully. Apr 14 01:08:53.555215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081-shm.mount: Deactivated successfully. Apr 14 01:08:53.555298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e-shm.mount: Deactivated successfully. Apr 14 01:08:53.602743 containerd[1483]: time="2026-04-14T01:08:53.581095968Z" level=error msg="Failed to destroy network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.634374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf-shm.mount: Deactivated successfully. 
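Every containerd and kubelet failure in the burst above reduces to one condition: /var/lib/calico/nodename is missing on the host, so each CNI ADD and DEL bails out before touching the network. The short Go sketch below reconstructs only the precondition that the error text describes; it is not the Calico plugin's actual source, and everything beyond the file path and the guidance string is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path named in every error record above; calico/node writes it when it
// starts and bind-mounts /var/lib/calico/ into the CNI plugin's view.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the shape of the logged failure: if the file is
// absent, fail with the same guidance text the CNI plugin reports.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // every sandbox ADD/DEL in this window fails at this point
	}
	fmt.Println("nodename:", name)
}
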
Apr 14 01:08:53.643662 containerd[1483]: time="2026-04-14T01:08:53.638801119Z" level=error msg="encountered an error cleaning up failed sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.667272 containerd[1483]: time="2026-04-14T01:08:53.662718739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-597645cf5b-qdbbf,Uid:2389b192-c3d5-480f-b72a-94e73783396e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.673603 containerd[1483]: time="2026-04-14T01:08:53.670339550Z" level=error msg="StopPodSandbox for \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\" failed" error="failed to destroy network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.673603 containerd[1483]: time="2026-04-14T01:08:53.670987678Z" level=error msg="StopPodSandbox for \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\" failed" error="failed to destroy network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.690922 kubelet[2757]: E0414 01:08:53.675271 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.690922 kubelet[2757]: E0414 01:08:53.675409 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:53.690922 kubelet[2757]: E0414 01:08:53.675440 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-597645cf5b-qdbbf" Apr 14 01:08:53.691159 kubelet[2757]: E0414 01:08:53.675624 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-597645cf5b-qdbbf_calico-system(2389b192-c3d5-480f-b72a-94e73783396e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-597645cf5b-qdbbf_calico-system(2389b192-c3d5-480f-b72a-94e73783396e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-597645cf5b-qdbbf" podUID="2389b192-c3d5-480f-b72a-94e73783396e" Apr 14 01:08:53.695647 kubelet[2757]: E0414 01:08:53.692586 2757 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:08:53.695647 kubelet[2757]: E0414 01:08:53.692655 2757 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86"} Apr 14 01:08:53.695647 kubelet[2757]: E0414 01:08:53.692717 2757 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03832b61-493a-446a-b13d-8da3b79bf6be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 01:08:53.695647 kubelet[2757]: E0414 01:08:53.692755 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03832b61-493a-446a-b13d-8da3b79bf6be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bdzqh" podUID="03832b61-493a-446a-b13d-8da3b79bf6be" Apr 14 01:08:53.701856 kubelet[2757]: E0414 01:08:53.692798 2757 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:08:53.701856 kubelet[2757]: E0414 01:08:53.692830 2757 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf"} Apr 14 01:08:53.701856 kubelet[2757]: E0414 01:08:53.692854 2757 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"597ea105-c0f3-48ad-ac8b-c14e65afc502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 01:08:53.701856 kubelet[2757]: E0414 01:08:53.692888 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"597ea105-c0f3-48ad-ac8b-c14e65afc502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b98884ffd-5drbf" podUID="597ea105-c0f3-48ad-ac8b-c14e65afc502" Apr 14 01:08:53.778696 systemd[1]: run-containerd-runc-k8s.io-dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a-runc.zuxnvL.mount: Deactivated successfully. Apr 14 01:08:53.787367 containerd[1483]: time="2026-04-14T01:08:53.784963386Z" level=error msg="StopPodSandbox for \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\" failed" error="failed to destroy network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:08:53.789031 kubelet[2757]: E0414 01:08:53.787607 2757 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:08:53.789031 kubelet[2757]: E0414 01:08:53.787792 2757 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2"} Apr 14 01:08:53.789031 kubelet[2757]: E0414 01:08:53.787846 2757 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c8f5369-6e4c-4879-910d-f06910f3f96b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 01:08:53.789031 kubelet[2757]: E0414 01:08:53.787878 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c8f5369-6e4c-4879-910d-f06910f3f96b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qtnrp" podUID="6c8f5369-6e4c-4879-910d-f06910f3f96b" Apr 14 01:08:53.888749 kubelet[2757]: I0414 01:08:53.888646 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p58wt" podStartSLOduration=7.035000921 podStartE2EDuration="49.888630184s" podCreationTimestamp="2026-04-14 01:08:04 +0000 UTC" firstStartedPulling="2026-04-14 01:08:05.291018359 +0000 UTC m=+66.465132281" lastFinishedPulling="2026-04-14 01:08:48.144647625 +0000 UTC m=+109.318761544" observedRunningTime="2026-04-14 01:08:53.187340374 +0000 UTC m=+114.361454319" watchObservedRunningTime="2026-04-14 01:08:53.888630184 +0000 UTC m=+115.062744133" Apr 14 01:08:53.925501 sshd[4029]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:53.938155 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:52942.service: Deactivated successfully. Apr 14 01:08:53.941340 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 01:08:53.955737 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Apr 14 01:08:53.961771 systemd-logind[1472]: Removed session 12. Apr 14 01:08:54.214612 kubelet[2757]: I0414 01:08:54.214218 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:08:54.219839 containerd[1483]: time="2026-04-14T01:08:54.216803691Z" level=info msg="StopPodSandbox for \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\"" Apr 14 01:08:54.248842 containerd[1483]: time="2026-04-14T01:08:54.248689760Z" level=info msg="Ensure that sandbox 0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e in task-service has been cleanup successfully" Apr 14 01:08:54.255872 containerd[1483]: time="2026-04-14T01:08:54.255752689Z" level=info msg="StopPodSandbox for \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\"" Apr 14 01:08:54.268499 containerd[1483]: time="2026-04-14T01:08:54.265974665Z" level=info msg="Ensure that sandbox 8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf in task-service has been cleanup successfully" Apr 14 01:08:54.272182 kubelet[2757]: I0414 01:08:54.269581 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:08:54.279497 kubelet[2757]: I0414 01:08:54.274731 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:08:54.285831 kubelet[2757]: I0414 01:08:54.284895 2757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:08:54.287405 containerd[1483]: time="2026-04-14T01:08:54.286782776Z" level=info msg="StopPodSandbox for \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\"" Apr 14 01:08:54.287405 containerd[1483]: time="2026-04-14T01:08:54.287004355Z" level=info msg="Ensure that sandbox 8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e in task-service has been cleanup successfully" Apr 14 01:08:54.287594 containerd[1483]: time="2026-04-14T01:08:54.287579290Z" level=info msg="StopPodSandbox for \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\"" Apr 14 01:08:54.288768 kubelet[2757]: I0414 01:08:54.288750 2757 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:08:54.295264 containerd[1483]: time="2026-04-14T01:08:54.295193381Z" level=info msg="StopPodSandbox for \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\"" Apr 14 01:08:54.304655 containerd[1483]: time="2026-04-14T01:08:54.304307768Z" level=info msg="Ensure that sandbox d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b in task-service has been cleanup successfully" Apr 14 01:08:54.305397 containerd[1483]: time="2026-04-14T01:08:54.305376990Z" level=info msg="Ensure that sandbox f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081 in task-service has been cleanup successfully" Apr 14 01:08:54.558505 systemd[1]: run-containerd-runc-k8s.io-dbb2027f3d4f6bc5d6d4618942b524dc04736ede5ec02e01506e0b31b9dca80a-runc.T5LRqv.mount: Deactivated successfully. Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.531 [INFO][4196] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.531 [INFO][4196] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" iface="eth0" netns="/var/run/netns/cni-28c5671b-43b8-5f41-f0e0-b7661ae6c2cc" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.531 [INFO][4196] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" iface="eth0" netns="/var/run/netns/cni-28c5671b-43b8-5f41-f0e0-b7661ae6c2cc" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.531 [INFO][4196] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" iface="eth0" netns="/var/run/netns/cni-28c5671b-43b8-5f41-f0e0-b7661ae6c2cc" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.531 [INFO][4196] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.532 [INFO][4196] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4263] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.606 [INFO][4263] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.606 [INFO][4263] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.635 [WARNING][4263] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.635 [INFO][4263] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.642 [INFO][4263] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:08:54.681169 containerd[1483]: 2026-04-14 01:08:54.666 [INFO][4196] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:08:54.683345 systemd[1]: run-netns-cni\x2d28c5671b\x2d43b8\x2d5f41\x2df0e0\x2db7661ae6c2cc.mount: Deactivated successfully. Apr 14 01:08:54.685135 containerd[1483]: time="2026-04-14T01:08:54.684979266Z" level=info msg="TearDown network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" successfully" Apr 14 01:08:54.685135 containerd[1483]: time="2026-04-14T01:08:54.685046870Z" level=info msg="StopPodSandbox for \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" returns successfully" Apr 14 01:08:54.686375 containerd[1483]: time="2026-04-14T01:08:54.686343161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nf2k5,Uid:7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7,Namespace:calico-system,Attempt:1,}" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.514 [INFO][4166] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.514 [INFO][4166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" iface="eth0" netns="/var/run/netns/cni-c8061fa4-ad9a-1ab9-4d16-621cd9ba5b03" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.515 [INFO][4166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" iface="eth0" netns="/var/run/netns/cni-c8061fa4-ad9a-1ab9-4d16-621cd9ba5b03" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.520 [INFO][4166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" iface="eth0" netns="/var/run/netns/cni-c8061fa4-ad9a-1ab9-4d16-621cd9ba5b03" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.520 [INFO][4166] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.520 [INFO][4166] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.638 [INFO][4249] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.646 [INFO][4249] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.650 [INFO][4249] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.666 [WARNING][4249] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.666 [INFO][4249] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.671 [INFO][4249] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:08:54.687550 containerd[1483]: 2026-04-14 01:08:54.678 [INFO][4166] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:08:54.688125 containerd[1483]: time="2026-04-14T01:08:54.688097024Z" level=info msg="TearDown network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" successfully" Apr 14 01:08:54.688170 containerd[1483]: time="2026-04-14T01:08:54.688161696Z" level=info msg="StopPodSandbox for \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" returns successfully" Apr 14 01:08:54.688953 containerd[1483]: time="2026-04-14T01:08:54.688915891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k4bw,Uid:ebecdfa2-d197-4725-b3b6-ab6cb5334f6e,Namespace:calico-system,Attempt:1,}" Apr 14 01:08:54.690147 systemd[1]: run-netns-cni\x2dc8061fa4\x2dad9a\x2d1ab9\x2d4d16\x2d621cd9ba5b03.mount: Deactivated successfully. Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.597 [INFO][4231] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4231] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" iface="eth0" netns="/var/run/netns/cni-d224ded6-ee11-dfa1-bb69-e2dd42c33995" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4231] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" iface="eth0" netns="/var/run/netns/cni-d224ded6-ee11-dfa1-bb69-e2dd42c33995" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4231] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" iface="eth0" netns="/var/run/netns/cni-d224ded6-ee11-dfa1-bb69-e2dd42c33995" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4231] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.601 [INFO][4231] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.694 [INFO][4273] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.694 [INFO][4273] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.694 [INFO][4273] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.707 [WARNING][4273] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.707 [INFO][4273] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.718 [INFO][4273] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:08:54.726188 containerd[1483]: 2026-04-14 01:08:54.720 [INFO][4231] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:08:54.726188 containerd[1483]: time="2026-04-14T01:08:54.725118349Z" level=info msg="TearDown network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" successfully" Apr 14 01:08:54.726188 containerd[1483]: time="2026-04-14T01:08:54.725160568Z" level=info msg="StopPodSandbox for \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" returns successfully" Apr 14 01:08:54.730490 containerd[1483]: time="2026-04-14T01:08:54.729904904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-t64fc,Uid:16f4d282-7525-45fa-9798-7274ea91b7f6,Namespace:calico-system,Attempt:1,}" Apr 14 01:08:54.729353 systemd[1]: run-netns-cni\x2dd224ded6\x2dee11\x2ddfa1\x2dbb69\x2de2dd42c33995.mount: Deactivated successfully. Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.595 [INFO][4220] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.605 [INFO][4220] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" iface="eth0" netns="/var/run/netns/cni-6b55decf-c3ef-a947-88de-ff8f90cbee8f" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.606 [INFO][4220] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" iface="eth0" netns="/var/run/netns/cni-6b55decf-c3ef-a947-88de-ff8f90cbee8f" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.630 [INFO][4220] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" iface="eth0" netns="/var/run/netns/cni-6b55decf-c3ef-a947-88de-ff8f90cbee8f" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.630 [INFO][4220] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.630 [INFO][4220] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.700 [INFO][4281] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.700 [INFO][4281] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.717 [INFO][4281] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.728 [WARNING][4281] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.728 [INFO][4281] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.730 [INFO][4281] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:08:54.736868 containerd[1483]: 2026-04-14 01:08:54.733 [INFO][4220] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:08:54.739328 containerd[1483]: time="2026-04-14T01:08:54.739165768Z" level=info msg="TearDown network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" successfully" Apr 14 01:08:54.740276 containerd[1483]: time="2026-04-14T01:08:54.740256195Z" level=info msg="StopPodSandbox for \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" returns successfully" Apr 14 01:08:54.742292 containerd[1483]: time="2026-04-14T01:08:54.741839279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8dbf654f-qm855,Uid:10b78b6d-5df6-4ebd-9b0e-7c5c5e956100,Namespace:calico-system,Attempt:1,}" Apr 14 01:08:54.741899 systemd[1]: run-netns-cni\x2d6b55decf\x2dc3ef\x2da947\x2d88de\x2dff8f90cbee8f.mount: Deactivated successfully. Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.599 [INFO][4221] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.600 [INFO][4221] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" iface="eth0" netns="/var/run/netns/cni-24412bce-ad6b-5c3a-f6ac-398d7c7f263a" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.602 [INFO][4221] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" iface="eth0" netns="/var/run/netns/cni-24412bce-ad6b-5c3a-f6ac-398d7c7f263a" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.611 [INFO][4221] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" iface="eth0" netns="/var/run/netns/cni-24412bce-ad6b-5c3a-f6ac-398d7c7f263a" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.632 [INFO][4221] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.632 [INFO][4221] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.708 [INFO][4285] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.717 [INFO][4285] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.730 [INFO][4285] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.745 [WARNING][4285] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.745 [INFO][4285] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.750 [INFO][4285] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:08:54.809364 containerd[1483]: 2026-04-14 01:08:54.759 [INFO][4221] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:08:54.859823 containerd[1483]: time="2026-04-14T01:08:54.859532412Z" level=info msg="TearDown network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" successfully" Apr 14 01:08:54.859823 containerd[1483]: time="2026-04-14T01:08:54.859596609Z" level=info msg="StopPodSandbox for \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" returns successfully" Apr 14 01:08:55.019868 kubelet[2757]: I0414 01:08:55.015084 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-whisker-ca-bundle\") pod \"2389b192-c3d5-480f-b72a-94e73783396e\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " Apr 14 01:08:55.019868 kubelet[2757]: I0414 01:08:55.015172 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2389b192-c3d5-480f-b72a-94e73783396e-whisker-backend-key-pair\") pod \"2389b192-c3d5-480f-b72a-94e73783396e\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " Apr 14 01:08:55.019868 kubelet[2757]: I0414 01:08:55.015209 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-nginx-config\") pod \"2389b192-c3d5-480f-b72a-94e73783396e\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " Apr 14 01:08:55.019868 kubelet[2757]: I0414 01:08:55.015227 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvbn\" (UniqueName: \"kubernetes.io/projected/2389b192-c3d5-480f-b72a-94e73783396e-kube-api-access-lkvbn\") pod \"2389b192-c3d5-480f-b72a-94e73783396e\" (UID: \"2389b192-c3d5-480f-b72a-94e73783396e\") " Apr 14 01:08:55.020808 kubelet[2757]: I0414 01:08:55.020158 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2389b192-c3d5-480f-b72a-94e73783396e" (UID: "2389b192-c3d5-480f-b72a-94e73783396e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 01:08:55.020808 kubelet[2757]: I0414 01:08:55.020224 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2389b192-c3d5-480f-b72a-94e73783396e" (UID: "2389b192-c3d5-480f-b72a-94e73783396e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 01:08:55.039313 kubelet[2757]: I0414 01:08:55.038913 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2389b192-c3d5-480f-b72a-94e73783396e-kube-api-access-lkvbn" (OuterVolumeSpecName: "kube-api-access-lkvbn") pod "2389b192-c3d5-480f-b72a-94e73783396e" (UID: "2389b192-c3d5-480f-b72a-94e73783396e"). InnerVolumeSpecName "kube-api-access-lkvbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 01:08:55.039313 kubelet[2757]: I0414 01:08:55.038911 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2389b192-c3d5-480f-b72a-94e73783396e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2389b192-c3d5-480f-b72a-94e73783396e" (UID: "2389b192-c3d5-480f-b72a-94e73783396e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 01:08:55.117826 kubelet[2757]: I0414 01:08:55.116650 2757 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 14 01:08:55.117826 kubelet[2757]: I0414 01:08:55.116760 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkvbn\" (UniqueName: \"kubernetes.io/projected/2389b192-c3d5-480f-b72a-94e73783396e-kube-api-access-lkvbn\") on node \"localhost\" DevicePath \"\"" Apr 14 01:08:55.117826 kubelet[2757]: I0414 01:08:55.116774 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2389b192-c3d5-480f-b72a-94e73783396e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 14 01:08:55.117826 kubelet[2757]: I0414 01:08:55.116784 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2389b192-c3d5-480f-b72a-94e73783396e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 14 01:08:55.322109 systemd[1]: Removed slice kubepods-besteffort-pod2389b192_c3d5_480f_b72a_94e73783396e.slice - libcontainer container kubepods-besteffort-pod2389b192_c3d5_480f_b72a_94e73783396e.slice. Apr 14 01:08:55.514670 systemd[1]: Created slice kubepods-besteffort-pod6c3742f0_69fc_45c0_9b7d_95a3e3b074b9.slice - libcontainer container kubepods-besteffort-pod6c3742f0_69fc_45c0_9b7d_95a3e3b074b9.slice. 
Apr 14 01:08:55.525411 kubelet[2757]: I0414 01:08:55.525365 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6c3742f0-69fc-45c0-9b7d-95a3e3b074b9-nginx-config\") pod \"whisker-77d98cdd77-46c5g\" (UID: \"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9\") " pod="calico-system/whisker-77d98cdd77-46c5g" Apr 14 01:08:55.525552 kubelet[2757]: I0414 01:08:55.525429 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c3742f0-69fc-45c0-9b7d-95a3e3b074b9-whisker-backend-key-pair\") pod \"whisker-77d98cdd77-46c5g\" (UID: \"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9\") " pod="calico-system/whisker-77d98cdd77-46c5g" Apr 14 01:08:55.525552 kubelet[2757]: I0414 01:08:55.525454 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3742f0-69fc-45c0-9b7d-95a3e3b074b9-whisker-ca-bundle\") pod \"whisker-77d98cdd77-46c5g\" (UID: \"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9\") " pod="calico-system/whisker-77d98cdd77-46c5g" Apr 14 01:08:55.525552 kubelet[2757]: I0414 01:08:55.525506 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngfdk\" (UniqueName: \"kubernetes.io/projected/6c3742f0-69fc-45c0-9b7d-95a3e3b074b9-kube-api-access-ngfdk\") pod \"whisker-77d98cdd77-46c5g\" (UID: \"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9\") " pod="calico-system/whisker-77d98cdd77-46c5g" Apr 14 01:08:55.545626 systemd-networkd[1376]: calie64108eeb08: Link UP Apr 14 01:08:55.545857 systemd-networkd[1376]: calie64108eeb08: Gained carrier Apr 14 01:08:55.563630 systemd[1]: run-netns-cni\x2d24412bce\x2dad6b\x2d5c3a\x2df6ac\x2d398d7c7f263a.mount: Deactivated successfully. Apr 14 01:08:55.563733 systemd[1]: var-lib-kubelet-pods-2389b192\x2dc3d5\x2d480f\x2db72a\x2d94e73783396e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkvbn.mount: Deactivated successfully. Apr 14 01:08:55.563813 systemd[1]: var-lib-kubelet-pods-2389b192\x2dc3d5\x2d480f\x2db72a\x2d94e73783396e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
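The mount units named in these systemd records (run-netns-cni\x2d24412bce…, var-lib-kubelet-pods-2389b192…\x7esecret-…) carry systemd's unit-name escaping: '/' becomes '-', and other bytes become \xNN, so \x2d is a literal '-' and \x7e a literal '~'. The small Go sketch below applies the reverse mapping, assuming only the documented escaping rules (it is not systemd's own implementation), to recover the paths these units refer to.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses systemd's escaping for path-based unit names:
// an unescaped '-' separates path components, and "\xNN" encodes byte 0xNN.
func unescapeUnitPath(unit string) (string, error) {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(unit); i++ {
		switch c := unit[i]; {
		case c == '-':
			b.WriteByte('/') // plain '-' stands for a path separator
		case c == '\\' && i+3 < len(unit) && unit[i+1] == 'x':
			v, err := strconv.ParseUint(unit[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape in %q: %w", unit, err)
			}
			b.WriteByte(byte(v)) // e.g. \x2d -> '-', \x7e -> '~'
			i += 3
		default:
			b.WriteByte(c)
		}
	}
	return b.String(), nil
}

func main() {
	// One of the unit names deactivated in the records above.
	unit := `run-netns-cni\x2d24412bce\x2dad6b\x2d5c3a\x2df6ac\x2d398d7c7f263a.mount`
	path, err := unescapeUnitPath(unit)
	if err != nil {
		panic(err)
	}
	fmt.Println(path) // /run/netns/cni-24412bce-ad6b-5c3a-f6ac-398d7c7f263a
}
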
Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.061 [ERROR][4333] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.113 [INFO][4333] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0 calico-apiserver-6b98884ffd- calico-system 16f4d282-7525-45fa-9798-7274ea91b7f6 1201 0 2026-04-14 01:08:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b98884ffd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b98884ffd-t64fc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie64108eeb08 [] [] }} ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.114 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.277 [INFO][4383] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" HandleID="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.305 [INFO][4383] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" HandleID="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044a260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b98884ffd-t64fc", "timestamp":"2026-04-14 01:08:55.277851812 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036a2c0)} Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.305 [INFO][4383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.305 [INFO][4383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.306 [INFO][4383] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.318 [INFO][4383] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.367 [INFO][4383] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.409 [INFO][4383] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.421 [INFO][4383] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.445 [INFO][4383] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.446 [INFO][4383] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.452 [INFO][4383] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.469 [INFO][4383] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.503 [INFO][4383] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.505 [INFO][4383] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" host="localhost" Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.507 [INFO][4383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
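The successful ADD for calico-apiserver-6b98884ffd-t64fc walks Calico IPAM's sequence as logged: acquire the host-wide lock, confirm this host's affinity to block 192.168.88.128/26, claim one free address for the handle, write the block back, and release the lock, ending with 192.168.88.129/26. The toy Go model below mirrors that flow; the in-memory store and the skipping of the block's base address are assumptions for illustration, and only the CIDR, the handle, and the resulting address come from the trace.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr netip.Prefix          // e.g. 192.168.88.128/26
	used map[netip.Addr]string // address -> handle that claimed it
}

type ipam struct {
	mu     sync.Mutex        // stands in for the "host-wide IPAM lock" in the trace
	blocks map[string]*block // host -> its affine block
}

// assign claims one free address from the block affine to host for handle.
func (p *ipam) assign(host, handle string) (netip.Addr, error) {
	p.mu.Lock()
	defer p.mu.Unlock()

	b, ok := p.blocks[host]
	if !ok {
		return netip.Addr{}, fmt.Errorf("no block affinity for host %q", host)
	}
	// Skip the block's base address, then take the first address inside the
	// prefix that has not been claimed yet.
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	p := &ipam{blocks: map[string]*block{
		"localhost": {
			cidr: netip.MustParsePrefix("192.168.88.128/26"),
			used: map[netip.Addr]string{},
		},
	}}
	addr, err := p.assign("localhost", "k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // 192.168.88.129, matching the claimed IP in the trace
}
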
Apr 14 01:08:55.576984 containerd[1483]: 2026-04-14 01:08:55.507 [INFO][4383] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" HandleID="k8s-pod-network.b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.523 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"16f4d282-7525-45fa-9798-7274ea91b7f6", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b98884ffd-t64fc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie64108eeb08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.523 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.523 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie64108eeb08 ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.547 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.548 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"16f4d282-7525-45fa-9798-7274ea91b7f6", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f", Pod:"calico-apiserver-6b98884ffd-t64fc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie64108eeb08", MAC:"4e:d3:b9:00:d4:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.579412 containerd[1483]: 2026-04-14 01:08:55.570 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-t64fc" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:08:55.596738 systemd-networkd[1376]: cali7b9278b0d40: Link UP Apr 14 01:08:55.596981 systemd-networkd[1376]: cali7b9278b0d40: Gained carrier Apr 14 01:08:55.614252 containerd[1483]: time="2026-04-14T01:08:55.612640313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:55.614252 containerd[1483]: time="2026-04-14T01:08:55.612697088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:55.614252 containerd[1483]: time="2026-04-14T01:08:55.612716920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.614252 containerd[1483]: time="2026-04-14T01:08:55.613663351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:54.913 [ERROR][4305] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.035 [INFO][4305] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--nf2k5-eth0 goldmane-5b85766d88- calico-system 7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7 1197 0 2026-04-14 01:08:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-nf2k5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b9278b0d40 [] [] }} ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.035 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.309 [INFO][4367] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" HandleID="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.361 [INFO][4367] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" HandleID="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001481f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-nf2k5", "timestamp":"2026-04-14 01:08:55.309730743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000554000)} Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.362 [INFO][4367] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.507 [INFO][4367] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.508 [INFO][4367] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.524 [INFO][4367] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.541 [INFO][4367] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.547 [INFO][4367] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.560 [INFO][4367] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.569 [INFO][4367] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.570 [INFO][4367] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.575 [INFO][4367] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.580 [INFO][4367] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4367] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4367] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" host="localhost" Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4367] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:08:55.656084 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4367] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" HandleID="k8s-pod-network.e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.593 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nf2k5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-nf2k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9278b0d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.593 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.593 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9278b0d40 ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.597 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.601 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nf2k5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc", Pod:"goldmane-5b85766d88-nf2k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9278b0d40", MAC:"e6:d5:c9:e8:0e:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.656703 containerd[1483]: 2026-04-14 01:08:55.652 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc" Namespace="calico-system" Pod="goldmane-5b85766d88-nf2k5" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:08:55.667015 systemd[1]: Started cri-containerd-b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f.scope - libcontainer container b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f. Apr 14 01:08:55.699276 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:08:55.703290 systemd-networkd[1376]: cali6560054bd4f: Link UP Apr 14 01:08:55.703704 systemd-networkd[1376]: cali6560054bd4f: Gained carrier Apr 14 01:08:55.706533 containerd[1483]: time="2026-04-14T01:08:55.706391990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:55.706761 containerd[1483]: time="2026-04-14T01:08:55.706698754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:55.706884 containerd[1483]: time="2026-04-14T01:08:55.706859857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.707322 containerd[1483]: time="2026-04-14T01:08:55.707261022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.019 [ERROR][4311] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.061 [INFO][4311] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4k4bw-eth0 csi-node-driver- calico-system ebecdfa2-d197-4725-b3b6-ab6cb5334f6e 1195 0 2026-04-14 01:08:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4k4bw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6560054bd4f [] [] }} ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.061 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.385 [INFO][4374] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" HandleID="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.401 [INFO][4374] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" HandleID="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4k4bw", "timestamp":"2026-04-14 01:08:55.385505553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000140dc0)} Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.401 [INFO][4374] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4374] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.587 [INFO][4374] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.632 [INFO][4374] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.654 [INFO][4374] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.661 [INFO][4374] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.667 [INFO][4374] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.671 [INFO][4374] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.672 [INFO][4374] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.676 [INFO][4374] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.684 [INFO][4374] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4374] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4374] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" host="localhost" Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4374] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:08:55.740831 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4374] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" HandleID="k8s-pod-network.3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.698 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4k4bw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4k4bw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6560054bd4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.698 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.698 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6560054bd4f ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.703 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.703 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4k4bw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f", Pod:"csi-node-driver-4k4bw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6560054bd4f", MAC:"66:7b:a1:2a:ad:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.741688 containerd[1483]: 2026-04-14 01:08:55.726 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f" Namespace="calico-system" Pod="csi-node-driver-4k4bw" WorkloadEndpoint="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:08:55.746819 systemd[1]: Started cri-containerd-e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc.scope - libcontainer container e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc. Apr 14 01:08:55.750611 containerd[1483]: time="2026-04-14T01:08:55.750222949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-t64fc,Uid:16f4d282-7525-45fa-9798-7274ea91b7f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f\"" Apr 14 01:08:55.755369 containerd[1483]: time="2026-04-14T01:08:55.751645278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 01:08:55.779901 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:08:55.792719 containerd[1483]: time="2026-04-14T01:08:55.792411852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:55.792719 containerd[1483]: time="2026-04-14T01:08:55.792456756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:55.792719 containerd[1483]: time="2026-04-14T01:08:55.792465154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.792719 containerd[1483]: time="2026-04-14T01:08:55.792619038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.814595 systemd[1]: Started cri-containerd-3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f.scope - libcontainer container 3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f. Apr 14 01:08:55.815644 systemd-networkd[1376]: cali3ec500ca478: Link UP Apr 14 01:08:55.816676 systemd-networkd[1376]: cali3ec500ca478: Gained carrier Apr 14 01:08:55.827298 containerd[1483]: time="2026-04-14T01:08:55.827256165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d98cdd77-46c5g,Uid:6c3742f0-69fc-45c0-9b7d-95a3e3b074b9,Namespace:calico-system,Attempt:0,}" Apr 14 01:08:55.831268 containerd[1483]: time="2026-04-14T01:08:55.831186268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nf2k5,Uid:7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc\"" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.099 [ERROR][4348] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.199 [INFO][4348] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0 calico-kube-controllers-7c8dbf654f- calico-system 10b78b6d-5df6-4ebd-9b0e-7c5c5e956100 1200 0 2026-04-14 01:08:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c8dbf654f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c8dbf654f-qm855 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3ec500ca478 [] [] }} ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.199 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.405 [INFO][4390] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" HandleID="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.437 [INFO][4390] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" 
HandleID="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003820b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c8dbf654f-qm855", "timestamp":"2026-04-14 01:08:55.40565198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff1e0)} Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.438 [INFO][4390] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4390] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.694 [INFO][4390] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.724 [INFO][4390] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.770 [INFO][4390] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.781 [INFO][4390] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.785 [INFO][4390] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.788 [INFO][4390] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.788 [INFO][4390] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.790 [INFO][4390] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.795 [INFO][4390] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.805 [INFO][4390] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.805 [INFO][4390] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" host="localhost" Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.805 [INFO][4390] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:08:55.858393 containerd[1483]: 2026-04-14 01:08:55.805 [INFO][4390] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" HandleID="k8s-pod-network.d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.810 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0", GenerateName:"calico-kube-controllers-7c8dbf654f-", Namespace:"calico-system", SelfLink:"", UID:"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8dbf654f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c8dbf654f-qm855", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec500ca478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.810 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.810 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ec500ca478 ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.822 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.823 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0", GenerateName:"calico-kube-controllers-7c8dbf654f-", Namespace:"calico-system", SelfLink:"", UID:"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8dbf654f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b", Pod:"calico-kube-controllers-7c8dbf654f-qm855", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec500ca478", MAC:"4a:72:cf:23:a3:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:55.859444 containerd[1483]: 2026-04-14 01:08:55.851 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b" Namespace="calico-system" Pod="calico-kube-controllers-7c8dbf654f-qm855" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:08:55.859453 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:08:55.912862 containerd[1483]: time="2026-04-14T01:08:55.912802624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k4bw,Uid:ebecdfa2-d197-4725-b3b6-ab6cb5334f6e,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f\"" Apr 14 01:08:55.969228 containerd[1483]: time="2026-04-14T01:08:55.967248345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:55.969228 containerd[1483]: time="2026-04-14T01:08:55.967357742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:55.969228 containerd[1483]: time="2026-04-14T01:08:55.967373000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:55.969228 containerd[1483]: time="2026-04-14T01:08:55.967445997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:56.066198 systemd[1]: Started cri-containerd-d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b.scope - libcontainer container d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b. Apr 14 01:08:56.179655 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:08:56.242563 kubelet[2757]: I0414 01:08:56.242214 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2389b192-c3d5-480f-b72a-94e73783396e" path="/var/lib/kubelet/pods/2389b192-c3d5-480f-b72a-94e73783396e/volumes" Apr 14 01:08:56.279451 containerd[1483]: time="2026-04-14T01:08:56.279381023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8dbf654f-qm855,Uid:10b78b6d-5df6-4ebd-9b0e-7c5c5e956100,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b\"" Apr 14 01:08:56.398879 systemd-networkd[1376]: cali8e4620d49c8: Link UP Apr 14 01:08:56.400554 systemd-networkd[1376]: cali8e4620d49c8: Gained carrier Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.014 [ERROR][4583] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.045 [INFO][4583] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77d98cdd77--46c5g-eth0 whisker-77d98cdd77- calico-system 6c3742f0-69fc-45c0-9b7d-95a3e3b074b9 1227 0 2026-04-14 01:08:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77d98cdd77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77d98cdd77-46c5g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8e4620d49c8 [] [] }} ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.045 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.199 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" HandleID="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Workload="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.225 [INFO][4672] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" HandleID="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Workload="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"whisker-77d98cdd77-46c5g", "timestamp":"2026-04-14 01:08:56.19955887 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005d2f20)} Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.225 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.226 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.226 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.244 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.287 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.293 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.301 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.313 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.314 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.320 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.329 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.354 [INFO][4672] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.356 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" host="localhost" Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.357 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:08:56.423352 containerd[1483]: 2026-04-14 01:08:56.357 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" HandleID="k8s-pod-network.10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Workload="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.379 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77d98cdd77--46c5g-eth0", GenerateName:"whisker-77d98cdd77-", Namespace:"calico-system", SelfLink:"", UID:"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77d98cdd77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77d98cdd77-46c5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8e4620d49c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.379 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.379 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e4620d49c8 ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.401 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.403 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77d98cdd77--46c5g-eth0", GenerateName:"whisker-77d98cdd77-", Namespace:"calico-system", SelfLink:"", UID:"6c3742f0-69fc-45c0-9b7d-95a3e3b074b9", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77d98cdd77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e", Pod:"whisker-77d98cdd77-46c5g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8e4620d49c8", MAC:"f2:81:f0:86:74:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:08:56.430552 containerd[1483]: 2026-04-14 01:08:56.419 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e" Namespace="calico-system" Pod="whisker-77d98cdd77-46c5g" WorkloadEndpoint="localhost-k8s-whisker--77d98cdd77--46c5g-eth0" Apr 14 01:08:56.496144 containerd[1483]: time="2026-04-14T01:08:56.495622146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:08:56.496144 containerd[1483]: time="2026-04-14T01:08:56.495845832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:08:56.496144 containerd[1483]: time="2026-04-14T01:08:56.495864809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:56.496700 containerd[1483]: time="2026-04-14T01:08:56.496154845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:08:56.526072 systemd[1]: Started cri-containerd-10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e.scope - libcontainer container 10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e. 
Apr 14 01:08:56.548706 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:08:56.565033 systemd-networkd[1376]: calie64108eeb08: Gained IPv6LL Apr 14 01:08:56.591368 containerd[1483]: time="2026-04-14T01:08:56.591293748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d98cdd77-46c5g,Uid:6c3742f0-69fc-45c0-9b7d-95a3e3b074b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e\"" Apr 14 01:08:56.612012 kernel: calico-node[4621]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 01:08:56.625286 systemd-networkd[1376]: cali7b9278b0d40: Gained IPv6LL Apr 14 01:08:56.944459 systemd-networkd[1376]: cali3ec500ca478: Gained IPv6LL Apr 14 01:08:57.311379 systemd-networkd[1376]: vxlan.calico: Link UP Apr 14 01:08:57.311389 systemd-networkd[1376]: vxlan.calico: Gained carrier Apr 14 01:08:57.329527 systemd-networkd[1376]: cali6560054bd4f: Gained IPv6LL Apr 14 01:08:58.033462 systemd-networkd[1376]: cali8e4620d49c8: Gained IPv6LL Apr 14 01:08:58.500377 containerd[1483]: time="2026-04-14T01:08:58.500270610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:58.501276 containerd[1483]: time="2026-04-14T01:08:58.500758532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 14 01:08:58.501858 containerd[1483]: time="2026-04-14T01:08:58.501811388Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:58.503796 containerd[1483]: time="2026-04-14T01:08:58.503756581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:08:58.504352 containerd[1483]: time="2026-04-14T01:08:58.504318746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.752633973s" Apr 14 01:08:58.504386 containerd[1483]: time="2026-04-14T01:08:58.504350863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 01:08:58.505358 containerd[1483]: time="2026-04-14T01:08:58.505336027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 14 01:08:58.512771 containerd[1483]: time="2026-04-14T01:08:58.512674979Z" level=info msg="CreateContainer within sandbox \"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 01:08:58.532709 containerd[1483]: time="2026-04-14T01:08:58.530904148Z" level=info msg="CreateContainer within sandbox \"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0\"" Apr 14 
01:08:58.534027 containerd[1483]: time="2026-04-14T01:08:58.533969690Z" level=info msg="StartContainer for \"8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0\"" Apr 14 01:08:58.590744 systemd[1]: run-containerd-runc-k8s.io-8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0-runc.yLwZKx.mount: Deactivated successfully. Apr 14 01:08:58.600023 systemd[1]: Started cri-containerd-8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0.scope - libcontainer container 8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0. Apr 14 01:08:58.649736 containerd[1483]: time="2026-04-14T01:08:58.649621891Z" level=info msg="StartContainer for \"8fd59c272a7778599d4f10ffa44dd5c69654b9e234a6eacedf1c7d7e52c9ffc0\" returns successfully" Apr 14 01:08:58.939470 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:52850.service - OpenSSH per-connection server daemon (10.0.0.1:52850). Apr 14 01:08:59.003189 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 52850 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:59.006985 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:59.010628 systemd-logind[1472]: New session 13 of user core. Apr 14 01:08:59.022118 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 01:08:59.120974 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Apr 14 01:08:59.191627 sshd[4984]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:59.208150 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:52850.service: Deactivated successfully. Apr 14 01:08:59.210708 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 01:08:59.212047 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. Apr 14 01:08:59.222344 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:52856.service - OpenSSH per-connection server daemon (10.0.0.1:52856). Apr 14 01:08:59.224822 systemd-logind[1472]: Removed session 13. Apr 14 01:08:59.286914 sshd[5004]: Accepted publickey for core from 10.0.0.1 port 52856 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:59.294969 sshd[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:59.337243 systemd-logind[1472]: New session 14 of user core. Apr 14 01:08:59.353967 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 14 01:08:59.596053 sshd[5004]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:59.624641 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:52856.service: Deactivated successfully. Apr 14 01:08:59.632354 systemd[1]: session-14.scope: Deactivated successfully. Apr 14 01:08:59.634131 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Apr 14 01:08:59.647805 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:52858.service - OpenSSH per-connection server daemon (10.0.0.1:52858). Apr 14 01:08:59.648852 systemd-logind[1472]: Removed session 14. Apr 14 01:08:59.730617 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 52858 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:08:59.731410 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:08:59.747319 systemd-logind[1472]: New session 15 of user core. Apr 14 01:08:59.761274 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 14 01:08:59.950342 sshd[5021]: pam_unix(sshd:session): session closed for user core Apr 14 01:08:59.960667 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:52858.service: Deactivated successfully. Apr 14 01:08:59.977784 systemd[1]: session-15.scope: Deactivated successfully. Apr 14 01:08:59.982300 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Apr 14 01:08:59.987312 systemd-logind[1472]: Removed session 15. Apr 14 01:09:00.071288 containerd[1483]: time="2026-04-14T01:09:00.071176842Z" level=info msg="StopPodSandbox for \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\"" Apr 14 01:09:00.353538 kubelet[2757]: I0414 01:09:00.352270 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b98884ffd-t64fc" podStartSLOduration=57.598428327 podStartE2EDuration="1m0.352255759s" podCreationTimestamp="2026-04-14 01:08:00 +0000 UTC" firstStartedPulling="2026-04-14 01:08:55.751376328 +0000 UTC m=+116.925490245" lastFinishedPulling="2026-04-14 01:08:58.505203759 +0000 UTC m=+119.679317677" observedRunningTime="2026-04-14 01:08:59.398894643 +0000 UTC m=+120.573008561" watchObservedRunningTime="2026-04-14 01:09:00.352255759 +0000 UTC m=+121.526369681" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.247 [WARNING][5045] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0", GenerateName:"calico-kube-controllers-7c8dbf654f-", Namespace:"calico-system", SelfLink:"", UID:"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8dbf654f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b", Pod:"calico-kube-controllers-7c8dbf654f-qm855", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec500ca478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.249 [INFO][5045] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.249 [INFO][5045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" iface="eth0" netns="" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.249 [INFO][5045] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.249 [INFO][5045] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.342 [INFO][5058] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.342 [INFO][5058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.343 [INFO][5058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.361 [WARNING][5058] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.361 [INFO][5058] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.364 [INFO][5058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:00.372386 containerd[1483]: 2026-04-14 01:09:00.366 [INFO][5045] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.372386 containerd[1483]: time="2026-04-14T01:09:00.372223013Z" level=info msg="TearDown network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" successfully" Apr 14 01:09:00.372386 containerd[1483]: time="2026-04-14T01:09:00.372244878Z" level=info msg="StopPodSandbox for \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" returns successfully" Apr 14 01:09:00.373576 containerd[1483]: time="2026-04-14T01:09:00.373426037Z" level=info msg="RemovePodSandbox for \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\"" Apr 14 01:09:00.373576 containerd[1483]: time="2026-04-14T01:09:00.373460304Z" level=info msg="Forcibly stopping sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\"" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.458 [WARNING][5079] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0", GenerateName:"calico-kube-controllers-7c8dbf654f-", Namespace:"calico-system", SelfLink:"", UID:"10b78b6d-5df6-4ebd-9b0e-7c5c5e956100", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8dbf654f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b", Pod:"calico-kube-controllers-7c8dbf654f-qm855", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec500ca478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.458 [INFO][5079] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.458 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" iface="eth0" netns="" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.458 [INFO][5079] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.458 [INFO][5079] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.481 [INFO][5087] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.481 [INFO][5087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.481 [INFO][5087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.500 [WARNING][5087] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.500 [INFO][5087] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" HandleID="k8s-pod-network.8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Workload="localhost-k8s-calico--kube--controllers--7c8dbf654f--qm855-eth0" Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.508 [INFO][5087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:00.512209 containerd[1483]: 2026-04-14 01:09:00.509 [INFO][5079] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e" Apr 14 01:09:00.513041 containerd[1483]: time="2026-04-14T01:09:00.512243121Z" level=info msg="TearDown network for sandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" successfully" Apr 14 01:09:00.522449 containerd[1483]: time="2026-04-14T01:09:00.522321803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:09:00.522449 containerd[1483]: time="2026-04-14T01:09:00.522484018Z" level=info msg="RemovePodSandbox \"8e9a19a3bce9f7328a3f8163906a1f68f6428448810d4432218d96a23c2e039e\" returns successfully" Apr 14 01:09:00.524206 containerd[1483]: time="2026-04-14T01:09:00.524158701Z" level=info msg="StopPodSandbox for \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\"" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.586 [WARNING][5103] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nf2k5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc", Pod:"goldmane-5b85766d88-nf2k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9278b0d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.586 [INFO][5103] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.586 [INFO][5103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" iface="eth0" netns="" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.586 [INFO][5103] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.586 [INFO][5103] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.612 [INFO][5111] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.612 [INFO][5111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.613 [INFO][5111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.622 [WARNING][5111] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.622 [INFO][5111] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.624 [INFO][5111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:00.629224 containerd[1483]: 2026-04-14 01:09:00.625 [INFO][5103] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.629224 containerd[1483]: time="2026-04-14T01:09:00.629039675Z" level=info msg="TearDown network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" successfully" Apr 14 01:09:00.629224 containerd[1483]: time="2026-04-14T01:09:00.629130351Z" level=info msg="StopPodSandbox for \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" returns successfully" Apr 14 01:09:00.629836 containerd[1483]: time="2026-04-14T01:09:00.629813393Z" level=info msg="RemovePodSandbox for \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\"" Apr 14 01:09:00.629874 containerd[1483]: time="2026-04-14T01:09:00.629852617Z" level=info msg="Forcibly stopping sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\"" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.683 [WARNING][5129] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nf2k5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b9a1088-c5ea-4bab-bffb-1a8c0f0d12a7", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc", Pod:"goldmane-5b85766d88-nf2k5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b9278b0d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.684 [INFO][5129] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.684 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" iface="eth0" netns="" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.684 [INFO][5129] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.684 [INFO][5129] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.706 [INFO][5139] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.706 [INFO][5139] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.706 [INFO][5139] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.713 [WARNING][5139] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.713 [INFO][5139] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" HandleID="k8s-pod-network.d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Workload="localhost-k8s-goldmane--5b85766d88--nf2k5-eth0" Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.714 [INFO][5139] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:00.721639 containerd[1483]: 2026-04-14 01:09:00.716 [INFO][5129] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b" Apr 14 01:09:00.721639 containerd[1483]: time="2026-04-14T01:09:00.720378510Z" level=info msg="TearDown network for sandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" successfully" Apr 14 01:09:00.734948 containerd[1483]: time="2026-04-14T01:09:00.734756367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:09:00.734948 containerd[1483]: time="2026-04-14T01:09:00.734875521Z" level=info msg="RemovePodSandbox \"d5787a02d249365b1ea184da48560980affe79bd8a37c32f11599e07a40a1c6b\" returns successfully" Apr 14 01:09:00.736123 containerd[1483]: time="2026-04-14T01:09:00.735778723Z" level=info msg="StopPodSandbox for \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\"" Apr 14 01:09:00.861704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351031659.mount: Deactivated successfully. Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.794 [WARNING][5156] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" WorkloadEndpoint="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.795 [INFO][5156] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.795 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" iface="eth0" netns="" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.795 [INFO][5156] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.795 [INFO][5156] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.847 [INFO][5165] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.849 [INFO][5165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.849 [INFO][5165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.863 [WARNING][5165] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.864 [INFO][5165] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.868 [INFO][5165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:00.875194 containerd[1483]: 2026-04-14 01:09:00.871 [INFO][5156] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:00.875194 containerd[1483]: time="2026-04-14T01:09:00.875282394Z" level=info msg="TearDown network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" successfully" Apr 14 01:09:00.875194 containerd[1483]: time="2026-04-14T01:09:00.875315684Z" level=info msg="StopPodSandbox for \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" returns successfully" Apr 14 01:09:00.877758 containerd[1483]: time="2026-04-14T01:09:00.877578203Z" level=info msg="RemovePodSandbox for \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\"" Apr 14 01:09:00.877758 containerd[1483]: time="2026-04-14T01:09:00.877618189Z" level=info msg="Forcibly stopping sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\"" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.929 [WARNING][5183] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" WorkloadEndpoint="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.930 [INFO][5183] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.930 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" iface="eth0" netns="" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.930 [INFO][5183] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.930 [INFO][5183] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.977 [INFO][5195] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.977 [INFO][5195] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.977 [INFO][5195] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.987 [WARNING][5195] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.987 [INFO][5195] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" HandleID="k8s-pod-network.8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Workload="localhost-k8s-whisker--597645cf5b--qdbbf-eth0" Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.991 [INFO][5195] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:01.002755 containerd[1483]: 2026-04-14 01:09:00.994 [INFO][5183] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf" Apr 14 01:09:01.002755 containerd[1483]: time="2026-04-14T01:09:00.998452361Z" level=info msg="TearDown network for sandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" successfully" Apr 14 01:09:01.022366 containerd[1483]: time="2026-04-14T01:09:01.022236500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:09:01.023218 containerd[1483]: time="2026-04-14T01:09:01.023019019Z" level=info msg="RemovePodSandbox \"8907f4f4a63073bd946a18a1c01cbe993f9ae95348196df52100a1c9cafb19cf\" returns successfully" Apr 14 01:09:01.024070 containerd[1483]: time="2026-04-14T01:09:01.023796541Z" level=info msg="StopPodSandbox for \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\"" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.080 [WARNING][5211] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4k4bw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f", Pod:"csi-node-driver-4k4bw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6560054bd4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.081 [INFO][5211] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.081 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" iface="eth0" netns="" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.081 [INFO][5211] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.081 [INFO][5211] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.134 [INFO][5219] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.136 [INFO][5219] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.136 [INFO][5219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.146 [WARNING][5219] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.146 [INFO][5219] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.156 [INFO][5219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:01.160830 containerd[1483]: 2026-04-14 01:09:01.158 [INFO][5211] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.160830 containerd[1483]: time="2026-04-14T01:09:01.160781666Z" level=info msg="TearDown network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" successfully" Apr 14 01:09:01.160830 containerd[1483]: time="2026-04-14T01:09:01.160805437Z" level=info msg="StopPodSandbox for \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" returns successfully" Apr 14 01:09:01.162029 containerd[1483]: time="2026-04-14T01:09:01.161965882Z" level=info msg="RemovePodSandbox for \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\"" Apr 14 01:09:01.162029 containerd[1483]: time="2026-04-14T01:09:01.162003717Z" level=info msg="Forcibly stopping sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\"" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.237 [WARNING][5237] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4k4bw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebecdfa2-d197-4725-b3b6-ab6cb5334f6e", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f", Pod:"csi-node-driver-4k4bw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6560054bd4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.238 [INFO][5237] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.238 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" iface="eth0" netns="" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.238 [INFO][5237] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.238 [INFO][5237] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.267 [INFO][5245] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.269 [INFO][5245] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.270 [INFO][5245] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.283 [WARNING][5245] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.284 [INFO][5245] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" HandleID="k8s-pod-network.0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Workload="localhost-k8s-csi--node--driver--4k4bw-eth0" Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.291 [INFO][5245] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:01.295872 containerd[1483]: 2026-04-14 01:09:01.294 [INFO][5237] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e" Apr 14 01:09:01.295872 containerd[1483]: time="2026-04-14T01:09:01.295743021Z" level=info msg="TearDown network for sandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" successfully" Apr 14 01:09:01.303003 containerd[1483]: time="2026-04-14T01:09:01.302118933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:09:01.303003 containerd[1483]: time="2026-04-14T01:09:01.302198183Z" level=info msg="RemovePodSandbox \"0f3dba28a5a3ad6dd960ef1655001fc8d5d153777788250191350aa331456d4e\" returns successfully" Apr 14 01:09:01.303795 containerd[1483]: time="2026-04-14T01:09:01.303156881Z" level=info msg="StopPodSandbox for \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\"" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.403 [WARNING][5263] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"16f4d282-7525-45fa-9798-7274ea91b7f6", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f", Pod:"calico-apiserver-6b98884ffd-t64fc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie64108eeb08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.404 [INFO][5263] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.404 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" iface="eth0" netns="" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.405 [INFO][5263] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.405 [INFO][5263] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.552 [INFO][5272] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.552 [INFO][5272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.553 [INFO][5272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.568 [WARNING][5272] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.568 [INFO][5272] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.573 [INFO][5272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:01.591171 containerd[1483]: 2026-04-14 01:09:01.578 [INFO][5263] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.591171 containerd[1483]: time="2026-04-14T01:09:01.591019863Z" level=info msg="TearDown network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" successfully" Apr 14 01:09:01.591171 containerd[1483]: time="2026-04-14T01:09:01.591233866Z" level=info msg="StopPodSandbox for \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" returns successfully" Apr 14 01:09:01.593286 containerd[1483]: time="2026-04-14T01:09:01.592675167Z" level=info msg="RemovePodSandbox for \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\"" Apr 14 01:09:01.593286 containerd[1483]: time="2026-04-14T01:09:01.592707239Z" level=info msg="Forcibly stopping sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\"" Apr 14 01:09:01.684200 containerd[1483]: time="2026-04-14T01:09:01.682222905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:01.685052 containerd[1483]: time="2026-04-14T01:09:01.684990232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 14 01:09:01.689349 containerd[1483]: time="2026-04-14T01:09:01.689267310Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:01.693199 containerd[1483]: time="2026-04-14T01:09:01.693041070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:01.695010 containerd[1483]: time="2026-04-14T01:09:01.694983697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.189623187s" Apr 14 01:09:01.695138 containerd[1483]: time="2026-04-14T01:09:01.695126667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 14 01:09:01.697134 containerd[1483]: time="2026-04-14T01:09:01.697115967Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 01:09:01.705697 containerd[1483]: time="2026-04-14T01:09:01.705643181Z" level=info msg="CreateContainer within sandbox \"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 14 01:09:01.734902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786229582.mount: Deactivated successfully. Apr 14 01:09:01.735382 containerd[1483]: time="2026-04-14T01:09:01.735316059Z" level=info msg="CreateContainer within sandbox \"e4bf09037bc4defa5cf07cb8863ec0b46760841740b0fe7038823148523486dc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d4af3cb9bb449db72d84124b970176c1bd3b4abbe190decd43c546958e68e176\"" Apr 14 01:09:01.736436 containerd[1483]: time="2026-04-14T01:09:01.736370585Z" level=info msg="StartContainer for \"d4af3cb9bb449db72d84124b970176c1bd3b4abbe190decd43c546958e68e176\"" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.710 [WARNING][5290] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"16f4d282-7525-45fa-9798-7274ea91b7f6", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60e658359f52ba868bf18728c6ded728e8491204f33131d1611d1f6fcbc763f", Pod:"calico-apiserver-6b98884ffd-t64fc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie64108eeb08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.711 [INFO][5290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.711 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" iface="eth0" netns="" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.711 [INFO][5290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.711 [INFO][5290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.766 [INFO][5304] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.767 [INFO][5304] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.767 [INFO][5304] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.780 [WARNING][5304] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.780 [INFO][5304] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" HandleID="k8s-pod-network.f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Workload="localhost-k8s-calico--apiserver--6b98884ffd--t64fc-eth0" Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.786 [INFO][5304] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:01.795660 containerd[1483]: 2026-04-14 01:09:01.793 [INFO][5290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081" Apr 14 01:09:01.795660 containerd[1483]: time="2026-04-14T01:09:01.795552717Z" level=info msg="TearDown network for sandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" successfully" Apr 14 01:09:01.801298 containerd[1483]: time="2026-04-14T01:09:01.801250636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:09:01.801390 containerd[1483]: time="2026-04-14T01:09:01.801321785Z" level=info msg="RemovePodSandbox \"f9cc2dda918cd211df160281a3aeb8e79a1382a51758df3b0cbfd156e3b3f081\" returns successfully" Apr 14 01:09:01.829813 systemd[1]: Started cri-containerd-d4af3cb9bb449db72d84124b970176c1bd3b4abbe190decd43c546958e68e176.scope - libcontainer container d4af3cb9bb449db72d84124b970176c1bd3b4abbe190decd43c546958e68e176. 
Apr 14 01:09:01.932787 containerd[1483]: time="2026-04-14T01:09:01.931806793Z" level=info msg="StartContainer for \"d4af3cb9bb449db72d84124b970176c1bd3b4abbe190decd43c546958e68e176\" returns successfully" Apr 14 01:09:02.446557 kubelet[2757]: I0414 01:09:02.446263 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-nf2k5" podStartSLOduration=55.595295224 podStartE2EDuration="1m1.446238135s" podCreationTimestamp="2026-04-14 01:08:01 +0000 UTC" firstStartedPulling="2026-04-14 01:08:55.845748849 +0000 UTC m=+117.019862759" lastFinishedPulling="2026-04-14 01:09:01.69669176 +0000 UTC m=+122.870805670" observedRunningTime="2026-04-14 01:09:02.445985168 +0000 UTC m=+123.620099094" watchObservedRunningTime="2026-04-14 01:09:02.446238135 +0000 UTC m=+123.620352064" Apr 14 01:09:04.255076 containerd[1483]: time="2026-04-14T01:09:04.254392455Z" level=info msg="StopPodSandbox for \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\"" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.677 [INFO][5426] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.689 [INFO][5426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" iface="eth0" netns="/var/run/netns/cni-1b306d4a-ea25-7a47-d3e1-c949151616b1" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.690 [INFO][5426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" iface="eth0" netns="/var/run/netns/cni-1b306d4a-ea25-7a47-d3e1-c949151616b1" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.690 [INFO][5426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" iface="eth0" netns="/var/run/netns/cni-1b306d4a-ea25-7a47-d3e1-c949151616b1" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.690 [INFO][5426] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.690 [INFO][5426] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.879 [INFO][5438] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" HandleID="k8s-pod-network.17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.886 [INFO][5438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.887 [INFO][5438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.967 [WARNING][5438] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" HandleID="k8s-pod-network.17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.967 [INFO][5438] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" HandleID="k8s-pod-network.17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:04.996 [INFO][5438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:05.042047 containerd[1483]: 2026-04-14 01:09:05.027 [INFO][5426] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf" Apr 14 01:09:05.071728 containerd[1483]: time="2026-04-14T01:09:05.067374202Z" level=info msg="TearDown network for sandbox \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\" successfully" Apr 14 01:09:05.071728 containerd[1483]: time="2026-04-14T01:09:05.067679946Z" level=info msg="StopPodSandbox for \"17227232e7e57da6f524e1a1077c9ca86e0ded7fa7258b7e1a60d751e44613cf\" returns successfully" Apr 14 01:09:05.097867 containerd[1483]: time="2026-04-14T01:09:05.087397984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-5drbf,Uid:597ea105-c0f3-48ad-ac8b-c14e65afc502,Namespace:calico-system,Attempt:1,}" Apr 14 01:09:05.166123 systemd[1]: run-netns-cni\x2d1b306d4a\x2dea25\x2d7a47\x2dd3e1\x2dc949151616b1.mount: Deactivated successfully. Apr 14 01:09:05.247041 containerd[1483]: time="2026-04-14T01:09:05.246490287Z" level=info msg="StopPodSandbox for \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\"" Apr 14 01:09:05.289317 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:52866.service - OpenSSH per-connection server daemon (10.0.0.1:52866). Apr 14 01:09:05.833364 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 52866 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:05.886673 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:05.935837 systemd-logind[1472]: New session 16 of user core. Apr 14 01:09:05.955435 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 14 01:09:06.538635 containerd[1483]: time="2026-04-14T01:09:06.536221090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:06.539269 containerd[1483]: time="2026-04-14T01:09:06.539217969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 01:09:06.554165 containerd[1483]: time="2026-04-14T01:09:06.553993644Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:06.681692 containerd[1483]: time="2026-04-14T01:09:06.678345979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:06.681692 containerd[1483]: time="2026-04-14T01:09:06.679263410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 4.981756977s" Apr 14 01:09:06.681692 containerd[1483]: time="2026-04-14T01:09:06.679329900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.900 [INFO][5472] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.910 [INFO][5472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" iface="eth0" netns="/var/run/netns/cni-71a6b54c-a419-7802-1a4c-1043bd046d3c" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.913 [INFO][5472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" iface="eth0" netns="/var/run/netns/cni-71a6b54c-a419-7802-1a4c-1043bd046d3c" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.914 [INFO][5472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" iface="eth0" netns="/var/run/netns/cni-71a6b54c-a419-7802-1a4c-1043bd046d3c" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.914 [INFO][5472] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:05.914 [INFO][5472] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.410 [INFO][5485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" HandleID="k8s-pod-network.130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.421 [INFO][5485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.422 [INFO][5485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.477 [WARNING][5485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" HandleID="k8s-pod-network.130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.477 [INFO][5485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" HandleID="k8s-pod-network.130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.539 [INFO][5485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:06.682169 containerd[1483]: 2026-04-14 01:09:06.623 [INFO][5472] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86" Apr 14 01:09:06.686754 containerd[1483]: time="2026-04-14T01:09:06.686705734Z" level=info msg="TearDown network for sandbox \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\" successfully" Apr 14 01:09:06.686919 containerd[1483]: time="2026-04-14T01:09:06.686898619Z" level=info msg="StopPodSandbox for \"130434aeaeb8c3ea3c2d87eb984bb895d6003f38d294f53dcbd9e4def01a1b86\" returns successfully" Apr 14 01:09:06.687616 kubelet[2757]: E0414 01:09:06.687587 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:06.692918 containerd[1483]: time="2026-04-14T01:09:06.692878460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 14 01:09:06.695474 systemd[1]: run-netns-cni\x2d71a6b54c\x2da419\x2d7802\x2d1a4c\x2d1043bd046d3c.mount: Deactivated successfully. 
Apr 14 01:09:06.724079 containerd[1483]: time="2026-04-14T01:09:06.693922793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdzqh,Uid:03832b61-493a-446a-b13d-8da3b79bf6be,Namespace:kube-system,Attempt:1,}" Apr 14 01:09:06.868111 containerd[1483]: time="2026-04-14T01:09:06.867902658Z" level=info msg="CreateContainer within sandbox \"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 01:09:07.410839 containerd[1483]: time="2026-04-14T01:09:07.409643336Z" level=info msg="CreateContainer within sandbox \"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9599128dd2eb6dddf315044df6a94f6268078186544691e386a2054be771cff1\"" Apr 14 01:09:07.412686 containerd[1483]: time="2026-04-14T01:09:07.411721236Z" level=info msg="StartContainer for \"9599128dd2eb6dddf315044df6a94f6268078186544691e386a2054be771cff1\"" Apr 14 01:09:07.534980 sshd[5455]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:07.625517 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:52866.service: Deactivated successfully. Apr 14 01:09:07.650367 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 01:09:07.675006 systemd-networkd[1376]: calie6f6ba81538: Link UP Apr 14 01:09:07.680374 systemd-networkd[1376]: calie6f6ba81538: Gained carrier Apr 14 01:09:07.681477 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. Apr 14 01:09:07.695106 systemd-logind[1472]: Removed session 16. Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:05.946 [INFO][5448] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0 calico-apiserver-6b98884ffd- calico-system 597ea105-c0f3-48ad-ac8b-c14e65afc502 1331 0 2026-04-14 01:08:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b98884ffd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b98884ffd-5drbf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie6f6ba81538 [] [] }} ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:06.013 [INFO][5448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:06.690 [INFO][5493] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" HandleID="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:06.871 [INFO][5493] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" 
HandleID="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cedd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b98884ffd-5drbf", "timestamp":"2026-04-14 01:09:06.690860571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:06.876 [INFO][5493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:06.999 [INFO][5493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.019 [INFO][5493] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.052 [INFO][5493] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.248 [INFO][5493] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.355 [INFO][5493] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.386 [INFO][5493] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.442 [INFO][5493] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.442 [INFO][5493] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.465 [INFO][5493] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638 Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.496 [INFO][5493] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.569 [INFO][5493] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.569 [INFO][5493] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" host="localhost" Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.570 [INFO][5493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:09:07.817704 containerd[1483]: 2026-04-14 01:09:07.570 [INFO][5493] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" HandleID="k8s-pod-network.ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Workload="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.645 [INFO][5448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"597ea105-c0f3-48ad-ac8b-c14e65afc502", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b98884ffd-5drbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie6f6ba81538", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.648 [INFO][5448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.648 [INFO][5448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6f6ba81538 ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.684 [INFO][5448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.684 [INFO][5448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0", GenerateName:"calico-apiserver-6b98884ffd-", Namespace:"calico-system", SelfLink:"", UID:"597ea105-c0f3-48ad-ac8b-c14e65afc502", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98884ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638", Pod:"calico-apiserver-6b98884ffd-5drbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie6f6ba81538", MAC:"da:ed:f8:1b:d7:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:07.818880 containerd[1483]: 2026-04-14 01:09:07.761 [INFO][5448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638" Namespace="calico-system" Pod="calico-apiserver-6b98884ffd-5drbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b98884ffd--5drbf-eth0" Apr 14 01:09:07.990798 systemd[1]: Started cri-containerd-9599128dd2eb6dddf315044df6a94f6268078186544691e386a2054be771cff1.scope - libcontainer container 9599128dd2eb6dddf315044df6a94f6268078186544691e386a2054be771cff1. Apr 14 01:09:08.096821 containerd[1483]: time="2026-04-14T01:09:08.096451334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:09:08.097182 containerd[1483]: time="2026-04-14T01:09:08.096749925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:09:08.097182 containerd[1483]: time="2026-04-14T01:09:08.096985200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:08.109982 containerd[1483]: time="2026-04-14T01:09:08.107353192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:08.174569 systemd-networkd[1376]: calic78421ce2ce: Link UP Apr 14 01:09:08.175332 systemd-networkd[1376]: calic78421ce2ce: Gained carrier Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.508 [INFO][5511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0 coredns-674b8bbfcf- kube-system 03832b61-493a-446a-b13d-8da3b79bf6be 1338 0 2026-04-14 01:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bdzqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic78421ce2ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.508 [INFO][5511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.762 [INFO][5534] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" HandleID="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.828 [INFO][5534] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" HandleID="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a4120), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bdzqh", "timestamp":"2026-04-14 01:09:07.762645212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017e6e0)} Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.828 [INFO][5534] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.828 [INFO][5534] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.828 [INFO][5534] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.856 [INFO][5534] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.915 [INFO][5534] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.982 [INFO][5534] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:07.999 [INFO][5534] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.027 [INFO][5534] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.030 [INFO][5534] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.035 [INFO][5534] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127 Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.079 [INFO][5534] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.153 [INFO][5534] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.153 [INFO][5534] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" host="localhost" Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.153 [INFO][5534] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:08.264828 containerd[1483]: 2026-04-14 01:09:08.153 [INFO][5534] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" HandleID="k8s-pod-network.57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Workload="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.262376 systemd[1]: Started cri-containerd-ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638.scope - libcontainer container ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638. 
Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.163 [INFO][5511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"03832b61-493a-446a-b13d-8da3b79bf6be", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bdzqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78421ce2ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.163 [INFO][5511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.164 [INFO][5511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic78421ce2ce ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.167 [INFO][5511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.169 [INFO][5511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"03832b61-493a-446a-b13d-8da3b79bf6be", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127", Pod:"coredns-674b8bbfcf-bdzqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic78421ce2ce", MAC:"92:8e:3f:10:32:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:08.269828 containerd[1483]: 2026-04-14 01:09:08.251 [INFO][5511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127" Namespace="kube-system" Pod="coredns-674b8bbfcf-bdzqh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bdzqh-eth0" Apr 14 01:09:08.285437 containerd[1483]: time="2026-04-14T01:09:08.284140953Z" level=info msg="StartContainer for \"9599128dd2eb6dddf315044df6a94f6268078186544691e386a2054be771cff1\" returns successfully" Apr 14 01:09:08.375675 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:09:08.405127 containerd[1483]: time="2026-04-14T01:09:08.404665623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:09:08.405127 containerd[1483]: time="2026-04-14T01:09:08.404778436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:09:08.405127 containerd[1483]: time="2026-04-14T01:09:08.404795679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:08.405127 containerd[1483]: time="2026-04-14T01:09:08.404911490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:08.518903 containerd[1483]: time="2026-04-14T01:09:08.517504030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98884ffd-5drbf,Uid:597ea105-c0f3-48ad-ac8b-c14e65afc502,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638\"" Apr 14 01:09:08.584792 systemd[1]: Started cri-containerd-57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127.scope - libcontainer container 57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127. Apr 14 01:09:08.626661 containerd[1483]: time="2026-04-14T01:09:08.604453436Z" level=info msg="CreateContainer within sandbox \"ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 01:09:08.759598 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:09:08.837868 containerd[1483]: time="2026-04-14T01:09:08.835726248Z" level=info msg="CreateContainer within sandbox \"ac781b5a49502294926d2b4c1207672c37ddac9d4754b74c7f8bbbc917bb8638\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e8dd58d0a4d7f46e49dd3cfb0bc9c7d024a5f276d368cc7941b29315fafc18f\"" Apr 14 01:09:08.856191 containerd[1483]: time="2026-04-14T01:09:08.855571978Z" level=info msg="StartContainer for \"8e8dd58d0a4d7f46e49dd3cfb0bc9c7d024a5f276d368cc7941b29315fafc18f\"" Apr 14 01:09:08.881154 containerd[1483]: time="2026-04-14T01:09:08.874979893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdzqh,Uid:03832b61-493a-446a-b13d-8da3b79bf6be,Namespace:kube-system,Attempt:1,} returns sandbox id \"57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127\"" Apr 14 01:09:08.881304 kubelet[2757]: E0414 01:09:08.876382 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:08.923014 containerd[1483]: time="2026-04-14T01:09:08.920914965Z" level=info msg="CreateContainer within sandbox \"57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 01:09:09.006015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749524865.mount: Deactivated successfully. Apr 14 01:09:09.022157 containerd[1483]: time="2026-04-14T01:09:09.022106825Z" level=info msg="CreateContainer within sandbox \"57ff17edb1f0a3f208002d8f0cc44988c290aa5bc8bb55676acdc3fb7ea2e127\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cc94b5f3236b9c361351d60255b3a462f4d78bb98f29f0ca03763b755aa8c86\"" Apr 14 01:09:09.023739 containerd[1483]: time="2026-04-14T01:09:09.023653508Z" level=info msg="StartContainer for \"0cc94b5f3236b9c361351d60255b3a462f4d78bb98f29f0ca03763b755aa8c86\"" Apr 14 01:09:09.043474 systemd[1]: Started cri-containerd-8e8dd58d0a4d7f46e49dd3cfb0bc9c7d024a5f276d368cc7941b29315fafc18f.scope - libcontainer container 8e8dd58d0a4d7f46e49dd3cfb0bc9c7d024a5f276d368cc7941b29315fafc18f. Apr 14 01:09:09.115262 systemd-networkd[1376]: calie6f6ba81538: Gained IPv6LL Apr 14 01:09:09.173345 systemd[1]: Started cri-containerd-0cc94b5f3236b9c361351d60255b3a462f4d78bb98f29f0ca03763b755aa8c86.scope - libcontainer container 0cc94b5f3236b9c361351d60255b3a462f4d78bb98f29f0ca03763b755aa8c86. 
Apr 14 01:09:09.221261 containerd[1483]: time="2026-04-14T01:09:09.220662590Z" level=info msg="StopPodSandbox for \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\"" Apr 14 01:09:09.318634 containerd[1483]: time="2026-04-14T01:09:09.316496857Z" level=info msg="StartContainer for \"8e8dd58d0a4d7f46e49dd3cfb0bc9c7d024a5f276d368cc7941b29315fafc18f\" returns successfully" Apr 14 01:09:09.404738 containerd[1483]: time="2026-04-14T01:09:09.403290276Z" level=info msg="StartContainer for \"0cc94b5f3236b9c361351d60255b3a462f4d78bb98f29f0ca03763b755aa8c86\" returns successfully" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.556 [INFO][5770] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.557 [INFO][5770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" iface="eth0" netns="/var/run/netns/cni-d078d960-3906-e551-0419-ab2ae338296f" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.558 [INFO][5770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" iface="eth0" netns="/var/run/netns/cni-d078d960-3906-e551-0419-ab2ae338296f" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.558 [INFO][5770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" iface="eth0" netns="/var/run/netns/cni-d078d960-3906-e551-0419-ab2ae338296f" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.560 [INFO][5770] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.560 [INFO][5770] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.626 [INFO][5799] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" HandleID="k8s-pod-network.c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.627 [INFO][5799] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.627 [INFO][5799] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.654 [WARNING][5799] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" HandleID="k8s-pod-network.c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.654 [INFO][5799] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" HandleID="k8s-pod-network.c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.672 [INFO][5799] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:09:09.693222 containerd[1483]: 2026-04-14 01:09:09.679 [INFO][5770] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2" Apr 14 01:09:09.693222 containerd[1483]: time="2026-04-14T01:09:09.686641739Z" level=info msg="TearDown network for sandbox \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\" successfully" Apr 14 01:09:09.693222 containerd[1483]: time="2026-04-14T01:09:09.686680247Z" level=info msg="StopPodSandbox for \"c2f79b68061fd81e011ebc1b5718ea757706def049635abe5d276c15466f44a2\" returns successfully" Apr 14 01:09:09.693222 containerd[1483]: time="2026-04-14T01:09:09.689081173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qtnrp,Uid:6c8f5369-6e4c-4879-910d-f06910f3f96b,Namespace:kube-system,Attempt:1,}" Apr 14 01:09:09.699509 kubelet[2757]: E0414 01:09:09.687000 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:09.751186 kubelet[2757]: E0414 01:09:09.751102 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:09.873720 kubelet[2757]: I0414 01:09:09.873425 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b98884ffd-5drbf" podStartSLOduration=69.873398501 podStartE2EDuration="1m9.873398501s" podCreationTimestamp="2026-04-14 01:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:09:09.839838144 +0000 UTC m=+131.013952067" watchObservedRunningTime="2026-04-14 01:09:09.873398501 +0000 UTC m=+131.047512427" Apr 14 01:09:09.948841 kubelet[2757]: I0414 01:09:09.948614 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bdzqh" podStartSLOduration=130.948590827 podStartE2EDuration="2m10.948590827s" podCreationTimestamp="2026-04-14 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:09:09.946505162 +0000 UTC m=+131.120619083" watchObservedRunningTime="2026-04-14 01:09:09.948590827 +0000 UTC m=+131.122704756" Apr 14 01:09:09.977538 systemd[1]: run-netns-cni\x2dd078d960\x2d3906\x2de551\x2d0419\x2dab2ae338296f.mount: Deactivated successfully. 
Apr 14 01:09:10.141604 systemd-networkd[1376]: calic78421ce2ce: Gained IPv6LL Apr 14 01:09:10.545735 systemd-networkd[1376]: calid92f64bdc01: Link UP Apr 14 01:09:10.546409 systemd-networkd[1376]: calid92f64bdc01: Gained carrier Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.070 [INFO][5810] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0 coredns-674b8bbfcf- kube-system 6c8f5369-6e4c-4879-910d-f06910f3f96b 1372 0 2026-04-14 01:06:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qtnrp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid92f64bdc01 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.071 [INFO][5810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.256 [INFO][5831] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" HandleID="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.299 [INFO][5831] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" HandleID="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005961f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qtnrp", "timestamp":"2026-04-14 01:09:10.256859255 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f8000)} Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.299 [INFO][5831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.299 [INFO][5831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.300 [INFO][5831] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.326 [INFO][5831] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.357 [INFO][5831] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.395 [INFO][5831] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.408 [INFO][5831] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.414 [INFO][5831] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.415 [INFO][5831] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.427 [INFO][5831] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4 Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.462 [INFO][5831] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.504 [INFO][5831] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.514 [INFO][5831] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" host="localhost" Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.514 [INFO][5831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:09:10.653957 containerd[1483]: 2026-04-14 01:09:10.514 [INFO][5831] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" HandleID="k8s-pod-network.a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Workload="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.656550 containerd[1483]: 2026-04-14 01:09:10.525 [INFO][5810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6c8f5369-6e4c-4879-910d-f06910f3f96b", ResourceVersion:"1372", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qtnrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid92f64bdc01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:10.656550 containerd[1483]: 2026-04-14 01:09:10.526 [INFO][5810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.656550 containerd[1483]: 2026-04-14 01:09:10.526 [INFO][5810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid92f64bdc01 ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.656550 containerd[1483]: 2026-04-14 01:09:10.532 [INFO][5810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.656550 
containerd[1483]: 2026-04-14 01:09:10.556 [INFO][5810] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6c8f5369-6e4c-4879-910d-f06910f3f96b", ResourceVersion:"1372", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4", Pod:"coredns-674b8bbfcf-qtnrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid92f64bdc01", MAC:"ba:2b:8f:69:4b:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:09:10.656550 containerd[1483]: 2026-04-14 01:09:10.620 [INFO][5810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qtnrp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qtnrp-eth0" Apr 14 01:09:10.785845 kubelet[2757]: E0414 01:09:10.775257 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:10.812531 containerd[1483]: time="2026-04-14T01:09:10.810556914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:09:10.840307 containerd[1483]: time="2026-04-14T01:09:10.813054334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:09:10.840307 containerd[1483]: time="2026-04-14T01:09:10.837140601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:10.858013 containerd[1483]: time="2026-04-14T01:09:10.857826506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:09:10.994026 systemd[1]: run-containerd-runc-k8s.io-a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4-runc.Bz7gxs.mount: Deactivated successfully. Apr 14 01:09:11.071074 systemd[1]: Started cri-containerd-a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4.scope - libcontainer container a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4. Apr 14 01:09:11.262749 systemd-resolved[1377]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:09:11.467288 containerd[1483]: time="2026-04-14T01:09:11.467105914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qtnrp,Uid:6c8f5369-6e4c-4879-910d-f06910f3f96b,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4\"" Apr 14 01:09:11.470280 kubelet[2757]: E0414 01:09:11.470109 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:11.569213 containerd[1483]: time="2026-04-14T01:09:11.569064499Z" level=info msg="CreateContainer within sandbox \"a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 01:09:11.733114 systemd-networkd[1376]: calid92f64bdc01: Gained IPv6LL Apr 14 01:09:11.836052 kubelet[2757]: E0414 01:09:11.823218 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:11.859669 kubelet[2757]: I0414 01:09:11.836399 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 01:09:11.896808 containerd[1483]: time="2026-04-14T01:09:11.892353328Z" level=info msg="CreateContainer within sandbox \"a1c28fbc0a12cfdd2b7cfd027d3f68e73fb3e485ddcc7832e4abfd1c7fbfefa4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6e0fd28b9bc669468c26a16be487b9686ca95a47c695155f9be9f8bbfa5ec53\"" Apr 14 01:09:11.952424 containerd[1483]: time="2026-04-14T01:09:11.952374193Z" level=info msg="StartContainer for \"b6e0fd28b9bc669468c26a16be487b9686ca95a47c695155f9be9f8bbfa5ec53\"" Apr 14 01:09:12.113083 systemd[1]: Started cri-containerd-b6e0fd28b9bc669468c26a16be487b9686ca95a47c695155f9be9f8bbfa5ec53.scope - libcontainer container b6e0fd28b9bc669468c26a16be487b9686ca95a47c695155f9be9f8bbfa5ec53. Apr 14 01:09:12.501743 containerd[1483]: time="2026-04-14T01:09:12.501201719Z" level=info msg="StartContainer for \"b6e0fd28b9bc669468c26a16be487b9686ca95a47c695155f9be9f8bbfa5ec53\" returns successfully" Apr 14 01:09:12.639373 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:48404.service - OpenSSH per-connection server daemon (10.0.0.1:48404). 
Apr 14 01:09:12.919991 kubelet[2757]: E0414 01:09:12.917212 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:13.100790 kubelet[2757]: I0414 01:09:13.100216 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qtnrp" podStartSLOduration=134.100190581 podStartE2EDuration="2m14.100190581s" podCreationTimestamp="2026-04-14 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:09:13.091510794 +0000 UTC m=+134.265624712" watchObservedRunningTime="2026-04-14 01:09:13.100190581 +0000 UTC m=+134.274304508" Apr 14 01:09:13.134769 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 48404 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:13.157230 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:13.215838 systemd-logind[1472]: New session 17 of user core. Apr 14 01:09:13.241264 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 14 01:09:13.941520 kubelet[2757]: E0414 01:09:13.937827 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:14.694502 sshd[5954]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:14.725972 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:48412.service - OpenSSH per-connection server daemon (10.0.0.1:48412). Apr 14 01:09:14.731366 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:48404.service: Deactivated successfully. Apr 14 01:09:14.738168 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 01:09:14.744422 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Apr 14 01:09:14.754538 systemd-logind[1472]: Removed session 17. Apr 14 01:09:14.910326 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 48412 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:14.915164 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:14.959978 kubelet[2757]: E0414 01:09:14.952535 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:14.973411 systemd-logind[1472]: New session 18 of user core. Apr 14 01:09:15.027864 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 14 01:09:16.392712 sshd[5976]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:16.468040 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:53110.service - OpenSSH per-connection server daemon (10.0.0.1:53110). Apr 14 01:09:16.481194 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit. Apr 14 01:09:16.488398 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:48412.service: Deactivated successfully. Apr 14 01:09:16.493399 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 01:09:16.507651 systemd-logind[1472]: Removed session 18. 
Apr 14 01:09:16.660793 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 53110 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:16.664903 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:16.684440 systemd-logind[1472]: New session 19 of user core. Apr 14 01:09:16.705245 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 01:09:17.373855 containerd[1483]: time="2026-04-14T01:09:17.373361083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:17.381109 containerd[1483]: time="2026-04-14T01:09:17.380394574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 14 01:09:17.385474 containerd[1483]: time="2026-04-14T01:09:17.385418784Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:17.396902 containerd[1483]: time="2026-04-14T01:09:17.396612042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:17.403972 containerd[1483]: time="2026-04-14T01:09:17.402968090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 10.703497885s" Apr 14 01:09:17.403972 containerd[1483]: time="2026-04-14T01:09:17.403031763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 14 01:09:17.417340 containerd[1483]: time="2026-04-14T01:09:17.417262106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 14 01:09:17.623495 containerd[1483]: time="2026-04-14T01:09:17.623193301Z" level=info msg="CreateContainer within sandbox \"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 14 01:09:17.668540 containerd[1483]: time="2026-04-14T01:09:17.667742590Z" level=info msg="CreateContainer within sandbox \"d3f23741c8bf0dfff428fe3e0a4b37a5facc87afe5811b94b792867891acd10b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b5eebf9d066e1d12c0239b0f52d0f66b9559581bc3b81e2be66a944d78029125\"" Apr 14 01:09:17.670326 containerd[1483]: time="2026-04-14T01:09:17.669725278Z" level=info msg="StartContainer for \"b5eebf9d066e1d12c0239b0f52d0f66b9559581bc3b81e2be66a944d78029125\"" Apr 14 01:09:17.844295 systemd[1]: Started cri-containerd-b5eebf9d066e1d12c0239b0f52d0f66b9559581bc3b81e2be66a944d78029125.scope - libcontainer container b5eebf9d066e1d12c0239b0f52d0f66b9559581bc3b81e2be66a944d78029125. 
Apr 14 01:09:18.359143 containerd[1483]: time="2026-04-14T01:09:18.357060153Z" level=info msg="StartContainer for \"b5eebf9d066e1d12c0239b0f52d0f66b9559581bc3b81e2be66a944d78029125\" returns successfully" Apr 14 01:09:19.106354 sshd[5996]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:19.238889 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:53122.service - OpenSSH per-connection server daemon (10.0.0.1:53122). Apr 14 01:09:19.239767 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:53110.service: Deactivated successfully. Apr 14 01:09:19.255981 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 01:09:19.303624 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit. Apr 14 01:09:19.320850 systemd-logind[1472]: Removed session 19. Apr 14 01:09:19.810747 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 53122 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:19.813500 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:19.931555 systemd-logind[1472]: New session 20 of user core. Apr 14 01:09:19.959683 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 01:09:20.241794 kubelet[2757]: E0414 01:09:20.241659 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:20.759753 kubelet[2757]: I0414 01:09:20.757996 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c8dbf654f-qm855" podStartSLOduration=55.626220732 podStartE2EDuration="1m16.757967628s" podCreationTimestamp="2026-04-14 01:08:04 +0000 UTC" firstStartedPulling="2026-04-14 01:08:56.282318019 +0000 UTC m=+117.456431937" lastFinishedPulling="2026-04-14 01:09:17.414064904 +0000 UTC m=+138.588178833" observedRunningTime="2026-04-14 01:09:19.432132516 +0000 UTC m=+140.606246440" watchObservedRunningTime="2026-04-14 01:09:20.757967628 +0000 UTC m=+141.932081557" Apr 14 01:09:22.230585 containerd[1483]: time="2026-04-14T01:09:22.224592075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:22.256498 containerd[1483]: time="2026-04-14T01:09:22.237851321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 14 01:09:22.286229 containerd[1483]: time="2026-04-14T01:09:22.285901438Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:22.291300 containerd[1483]: time="2026-04-14T01:09:22.291220024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:22.295001 containerd[1483]: time="2026-04-14T01:09:22.293719565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 4.876361212s" Apr 14 01:09:22.295001 containerd[1483]: 
time="2026-04-14T01:09:22.294013131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 14 01:09:22.319823 containerd[1483]: time="2026-04-14T01:09:22.319406970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 14 01:09:22.407480 containerd[1483]: time="2026-04-14T01:09:22.407289625Z" level=info msg="CreateContainer within sandbox \"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 14 01:09:22.833514 containerd[1483]: time="2026-04-14T01:09:22.829843054Z" level=info msg="CreateContainer within sandbox \"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"55397bb2ebe277163ba995e4c6d5b45cbb90a8826d63aa651524c835c662f9ce\"" Apr 14 01:09:23.140586 containerd[1483]: time="2026-04-14T01:09:23.048795244Z" level=info msg="StartContainer for \"55397bb2ebe277163ba995e4c6d5b45cbb90a8826d63aa651524c835c662f9ce\"" Apr 14 01:09:23.364270 sshd[6089]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:23.582964 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:53122.service: Deactivated successfully. Apr 14 01:09:23.605853 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 01:09:23.695262 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit. Apr 14 01:09:23.721814 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). Apr 14 01:09:23.723876 systemd-logind[1472]: Removed session 20. Apr 14 01:09:23.937577 systemd[1]: Started cri-containerd-55397bb2ebe277163ba995e4c6d5b45cbb90a8826d63aa651524c835c662f9ce.scope - libcontainer container 55397bb2ebe277163ba995e4c6d5b45cbb90a8826d63aa651524c835c662f9ce. Apr 14 01:09:24.222737 containerd[1483]: time="2026-04-14T01:09:24.221829176Z" level=info msg="StartContainer for \"55397bb2ebe277163ba995e4c6d5b45cbb90a8826d63aa651524c835c662f9ce\" returns successfully" Apr 14 01:09:24.226230 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:24.250086 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:24.276255 systemd-logind[1472]: New session 21 of user core. Apr 14 01:09:24.295430 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 01:09:24.542570 sshd[6168]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:24.570878 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:53130.service: Deactivated successfully. Apr 14 01:09:24.574035 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 01:09:24.575001 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit. Apr 14 01:09:24.576152 systemd-logind[1472]: Removed session 21. 
Apr 14 01:09:25.207383 containerd[1483]: time="2026-04-14T01:09:25.207133735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:25.208142 containerd[1483]: time="2026-04-14T01:09:25.207872116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 14 01:09:25.210429 containerd[1483]: time="2026-04-14T01:09:25.210304793Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:25.214675 containerd[1483]: time="2026-04-14T01:09:25.214521914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:25.217037 containerd[1483]: time="2026-04-14T01:09:25.216997217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.897525394s" Apr 14 01:09:25.217037 containerd[1483]: time="2026-04-14T01:09:25.217035277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 14 01:09:25.217978 containerd[1483]: time="2026-04-14T01:09:25.217956309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 14 01:09:25.224338 containerd[1483]: time="2026-04-14T01:09:25.224198005Z" level=info msg="CreateContainer within sandbox \"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 14 01:09:25.253879 containerd[1483]: time="2026-04-14T01:09:25.253806683Z" level=info msg="CreateContainer within sandbox \"3cae6b662ccbf882d47680b3438796e761b978d2b8e05dcb0cfdac860d523e9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"57c39ff48374279280f00dd209ae075d2973101c259510c1f7f5c8f1e432c86f\"" Apr 14 01:09:25.254906 containerd[1483]: time="2026-04-14T01:09:25.254873936Z" level=info msg="StartContainer for \"57c39ff48374279280f00dd209ae075d2973101c259510c1f7f5c8f1e432c86f\"" Apr 14 01:09:25.306567 systemd[1]: Started cri-containerd-57c39ff48374279280f00dd209ae075d2973101c259510c1f7f5c8f1e432c86f.scope - libcontainer container 57c39ff48374279280f00dd209ae075d2973101c259510c1f7f5c8f1e432c86f. 
Apr 14 01:09:25.365361 containerd[1483]: time="2026-04-14T01:09:25.365058696Z" level=info msg="StartContainer for \"57c39ff48374279280f00dd209ae075d2973101c259510c1f7f5c8f1e432c86f\" returns successfully" Apr 14 01:09:25.459826 kubelet[2757]: I0414 01:09:25.459565 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4k4bw" podStartSLOduration=52.169524761 podStartE2EDuration="1m21.459545867s" podCreationTimestamp="2026-04-14 01:08:04 +0000 UTC" firstStartedPulling="2026-04-14 01:08:55.927808016 +0000 UTC m=+117.101921929" lastFinishedPulling="2026-04-14 01:09:25.217829124 +0000 UTC m=+146.391943035" observedRunningTime="2026-04-14 01:09:25.445797409 +0000 UTC m=+146.619911345" watchObservedRunningTime="2026-04-14 01:09:25.459545867 +0000 UTC m=+146.633659795" Apr 14 01:09:26.400856 kubelet[2757]: I0414 01:09:26.400673 2757 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 14 01:09:26.409153 kubelet[2757]: I0414 01:09:26.408955 2757 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 14 01:09:27.038324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977404556.mount: Deactivated successfully. Apr 14 01:09:27.060218 containerd[1483]: time="2026-04-14T01:09:27.060165374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:27.060836 containerd[1483]: time="2026-04-14T01:09:27.060799256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 14 01:09:27.062284 containerd[1483]: time="2026-04-14T01:09:27.062249184Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:27.064533 containerd[1483]: time="2026-04-14T01:09:27.064461363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:09:27.065323 containerd[1483]: time="2026-04-14T01:09:27.065287513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.847304737s" Apr 14 01:09:27.065323 containerd[1483]: time="2026-04-14T01:09:27.065322589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 14 01:09:27.068732 containerd[1483]: time="2026-04-14T01:09:27.068692713Z" level=info msg="CreateContainer within sandbox \"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 14 01:09:27.094873 containerd[1483]: time="2026-04-14T01:09:27.094816684Z" level=info msg="CreateContainer within sandbox 
\"10cb4dd9635a6af4fa0d17a54791476bc8150367911cf67d81dee115e940d98e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"45c09829b9030429acf2d7bb142e3de2c98270acbedaf23b07b07dbfaa03cad3\"" Apr 14 01:09:27.095421 containerd[1483]: time="2026-04-14T01:09:27.095399382Z" level=info msg="StartContainer for \"45c09829b9030429acf2d7bb142e3de2c98270acbedaf23b07b07dbfaa03cad3\"" Apr 14 01:09:27.138138 systemd[1]: Started cri-containerd-45c09829b9030429acf2d7bb142e3de2c98270acbedaf23b07b07dbfaa03cad3.scope - libcontainer container 45c09829b9030429acf2d7bb142e3de2c98270acbedaf23b07b07dbfaa03cad3. Apr 14 01:09:27.215540 containerd[1483]: time="2026-04-14T01:09:27.215286815Z" level=info msg="StartContainer for \"45c09829b9030429acf2d7bb142e3de2c98270acbedaf23b07b07dbfaa03cad3\" returns successfully" Apr 14 01:09:29.618096 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:36826.service - OpenSSH per-connection server daemon (10.0.0.1:36826). Apr 14 01:09:30.005538 sshd[6323]: Accepted publickey for core from 10.0.0.1 port 36826 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:30.044143 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:30.119418 systemd-logind[1472]: New session 22 of user core. Apr 14 01:09:30.142039 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 01:09:31.221257 kubelet[2757]: E0414 01:09:31.219169 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:32.114404 sshd[6323]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:32.165172 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit. Apr 14 01:09:32.182520 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:36826.service: Deactivated successfully. Apr 14 01:09:32.202139 systemd[1]: session-22.scope: Deactivated successfully. Apr 14 01:09:32.215003 systemd-logind[1472]: Removed session 22. Apr 14 01:09:34.060146 kubelet[2757]: I0414 01:09:34.059476 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77d98cdd77-46c5g" podStartSLOduration=8.586026434 podStartE2EDuration="39.059456466s" podCreationTimestamp="2026-04-14 01:08:55 +0000 UTC" firstStartedPulling="2026-04-14 01:08:56.592902931 +0000 UTC m=+117.767016842" lastFinishedPulling="2026-04-14 01:09:27.066332957 +0000 UTC m=+148.240446874" observedRunningTime="2026-04-14 01:09:27.500952939 +0000 UTC m=+148.675066854" watchObservedRunningTime="2026-04-14 01:09:34.059456466 +0000 UTC m=+155.233570398" Apr 14 01:09:35.244209 kubelet[2757]: E0414 01:09:35.237895 2757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:09:37.155374 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:34756.service - OpenSSH per-connection server daemon (10.0.0.1:34756). Apr 14 01:09:37.278329 sshd[6373]: Accepted publickey for core from 10.0.0.1 port 34756 ssh2: RSA SHA256:wfK2jyaERm42muvw4ft1sSLNM+Ip6p3SOXL7pxOImbw Apr 14 01:09:37.284965 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:09:37.293234 systemd-logind[1472]: New session 23 of user core. Apr 14 01:09:37.312910 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 14 01:09:37.556253 sshd[6373]: pam_unix(sshd:session): session closed for user core Apr 14 01:09:37.563433 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:34756.service: Deactivated successfully. Apr 14 01:09:37.566138 systemd[1]: session-23.scope: Deactivated successfully. Apr 14 01:09:37.567145 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit. Apr 14 01:09:37.568679 systemd-logind[1472]: Removed session 23.