Apr 16 01:05:16.916469 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 01:05:16.916488 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:05:16.916497 kernel: BIOS-provided physical RAM map:
Apr 16 01:05:16.916502 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 01:05:16.916506 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 16 01:05:16.916510 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 16 01:05:16.916515 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 16 01:05:16.916520 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 16 01:05:16.916524 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 16 01:05:16.916528 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 16 01:05:16.916534 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 16 01:05:16.916538 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 16 01:05:16.916542 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 16 01:05:16.916547 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 16 01:05:16.916552 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 16 01:05:16.916557 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 16 01:05:16.916564 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 16 01:05:16.916568 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 16 01:05:16.916573 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 16 01:05:16.916577 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 01:05:16.916582 kernel: NX (Execute Disable) protection: active
Apr 16 01:05:16.916586 kernel: APIC: Static calls initialized
Apr 16 01:05:16.916591 kernel: efi: EFI v2.7 by EDK II
Apr 16 01:05:16.916596 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 16 01:05:16.916600 kernel: SMBIOS 2.8 present.
Apr 16 01:05:16.916605 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 16 01:05:16.916609 kernel: Hypervisor detected: KVM
Apr 16 01:05:16.916615 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 01:05:16.916620 kernel: kvm-clock: using sched offset of 15708809892 cycles
Apr 16 01:05:16.916625 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 01:05:16.916630 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 01:05:16.916635 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 01:05:16.916710 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 01:05:16.916716 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 16 01:05:16.916721 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 01:05:16.916726 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 01:05:16.916733 kernel: Using GB pages for direct mapping
Apr 16 01:05:16.916738 kernel: Secure boot disabled
Apr 16 01:05:16.916743 kernel: ACPI: Early table checksum verification disabled
Apr 16 01:05:16.916748 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 16 01:05:16.916755 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 01:05:16.916760 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916765 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916772 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 16 01:05:16.916778 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916783 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916787 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916793 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:05:16.916797 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 01:05:16.916802 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 16 01:05:16.916809 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 16 01:05:16.916814 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 16 01:05:16.916819 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 16 01:05:16.916824 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 16 01:05:16.916829 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 16 01:05:16.916834 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 16 01:05:16.916839 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 16 01:05:16.916844 kernel: No NUMA configuration found
Apr 16 01:05:16.916849 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 16 01:05:16.916855 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 16 01:05:16.916860 kernel: Zone ranges:
Apr 16 01:05:16.916865 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 01:05:16.916870 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 16 01:05:16.916875 kernel: Normal empty
Apr 16 01:05:16.916880 kernel: Movable zone start for each node
Apr 16 01:05:16.916885 kernel: Early memory node ranges
Apr 16 01:05:16.916890 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 01:05:16.916895 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 16 01:05:16.916900 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 16 01:05:16.916906 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 16 01:05:16.916911 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 16 01:05:16.916916 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 16 01:05:16.916921 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 16 01:05:16.916926 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 01:05:16.916931 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 01:05:16.916935 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 16 01:05:16.916940 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 01:05:16.916945 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 16 01:05:16.916952 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 16 01:05:16.916957 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 16 01:05:16.916962 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 01:05:16.916967 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 01:05:16.916972 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 01:05:16.916977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 01:05:16.916982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 01:05:16.916987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 01:05:16.916992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 01:05:16.916999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 01:05:16.917004 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 01:05:16.917009 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 01:05:16.917013 kernel: TSC deadline timer available
Apr 16 01:05:16.917018 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 01:05:16.917023 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 01:05:16.917028 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 01:05:16.917033 kernel: kvm-guest: setup PV sched yield
Apr 16 01:05:16.917038 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 16 01:05:16.917045 kernel: Booting paravirtualized kernel on KVM
Apr 16 01:05:16.917050 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 01:05:16.917055 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 01:05:16.917060 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 01:05:16.917065 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 01:05:16.917070 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 01:05:16.917075 kernel: kvm-guest: PV spinlocks enabled
Apr 16 01:05:16.917080 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 01:05:16.917086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:05:16.917093 kernel: random: crng init done
Apr 16 01:05:16.917098 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 01:05:16.917103 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 01:05:16.917108 kernel: Fallback order for Node 0: 0
Apr 16 01:05:16.917113 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 16 01:05:16.917118 kernel: Policy zone: DMA32
Apr 16 01:05:16.917123 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 01:05:16.917128 kernel: Memory: 2399656K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167140K reserved, 0K cma-reserved)
Apr 16 01:05:16.917134 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 01:05:16.917139 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 01:05:16.917144 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 01:05:16.917149 kernel: Dynamic Preempt: voluntary
Apr 16 01:05:16.917154 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 01:05:16.917166 kernel: rcu: RCU event tracing is enabled.
Apr 16 01:05:16.917173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 01:05:16.917179 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 01:05:16.917184 kernel: Rude variant of Tasks RCU enabled.
Apr 16 01:05:16.917190 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 01:05:16.917195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 01:05:16.917201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 01:05:16.917208 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 01:05:16.917308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 01:05:16.917314 kernel: Console: colour dummy device 80x25
Apr 16 01:05:16.917320 kernel: printk: console [ttyS0] enabled
Apr 16 01:05:16.917326 kernel: ACPI: Core revision 20230628
Apr 16 01:05:16.917334 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 01:05:16.917340 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 01:05:16.917345 kernel: x2apic enabled
Apr 16 01:05:16.917350 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 01:05:16.917356 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 01:05:16.917361 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 01:05:16.917367 kernel: kvm-guest: setup PV IPIs
Apr 16 01:05:16.917372 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 01:05:16.917378 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 01:05:16.917385 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 01:05:16.917391 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 01:05:16.917396 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 01:05:16.917402 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 01:05:16.917407 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 01:05:16.917413 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 01:05:16.917418 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 01:05:16.917424 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 01:05:16.917429 kernel: RETBleed: Vulnerable
Apr 16 01:05:16.917437 kernel: Speculative Store Bypass: Vulnerable
Apr 16 01:05:16.917442 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 01:05:16.917448 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 01:05:16.917453 kernel: active return thunk: its_return_thunk
Apr 16 01:05:16.917459 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 01:05:16.917464 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 01:05:16.917470 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 01:05:16.917475 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 01:05:16.917481 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 01:05:16.917488 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 01:05:16.917493 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 01:05:16.917499 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 01:05:16.917504 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 01:05:16.917510 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 01:05:16.917516 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 01:05:16.917521 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 01:05:16.917527 kernel: Freeing SMP alternatives memory: 32K
Apr 16 01:05:16.917532 kernel: pid_max: default: 32768 minimum: 301
Apr 16 01:05:16.917540 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 01:05:16.917545 kernel: landlock: Up and running.
Apr 16 01:05:16.917551 kernel: SELinux: Initializing.
Apr 16 01:05:16.917557 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 01:05:16.917562 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 01:05:16.917568 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 01:05:16.917574 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:05:16.917579 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:05:16.917585 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:05:16.917592 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 01:05:16.917597 kernel: signal: max sigframe size: 3632
Apr 16 01:05:16.917603 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 01:05:16.917609 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 01:05:16.917614 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 01:05:16.917620 kernel: smp: Bringing up secondary CPUs ...
Apr 16 01:05:16.917625 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 01:05:16.917631 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 01:05:16.917636 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 01:05:16.917704 kernel: smpboot: Max logical packages: 1
Apr 16 01:05:16.917710 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 01:05:16.917716 kernel: devtmpfs: initialized
Apr 16 01:05:16.917721 kernel: x86/mm: Memory block size: 128MB
Apr 16 01:05:16.917726 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 16 01:05:16.917732 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 16 01:05:16.917738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 16 01:05:16.917743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 16 01:05:16.917748 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 16 01:05:16.917756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 01:05:16.917762 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 01:05:16.917767 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 01:05:16.917773 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 01:05:16.917778 kernel: audit: initializing netlink subsys (disabled)
Apr 16 01:05:16.917784 kernel: audit: type=2000 audit(1776301510.844:1): state=initialized audit_enabled=0 res=1
Apr 16 01:05:16.917789 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 01:05:16.917795 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 01:05:16.917802 kernel: cpuidle: using governor menu
Apr 16 01:05:16.917808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 01:05:16.917813 kernel: dca service started, version 1.12.1
Apr 16 01:05:16.917819 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 01:05:16.917824 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 01:05:16.917830 kernel: PCI: Using configuration type 1 for base access
Apr 16 01:05:16.917835 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 01:05:16.917841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 01:05:16.917846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 01:05:16.917853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 01:05:16.917859 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 01:05:16.917864 kernel: ACPI: Added _OSI(Module Device)
Apr 16 01:05:16.917869 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 01:05:16.917875 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 01:05:16.917880 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 01:05:16.917886 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 01:05:16.917892 kernel: ACPI: Interpreter enabled
Apr 16 01:05:16.917897 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 01:05:16.917904 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 01:05:16.917909 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 01:05:16.917915 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 01:05:16.917921 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 01:05:16.917926 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 01:05:16.918051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 01:05:16.918119 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 01:05:16.918179 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 01:05:16.918188 kernel: PCI host bridge to bus 0000:00
Apr 16 01:05:16.918364 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 01:05:16.918421 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 01:05:16.918475 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 01:05:16.918528 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 01:05:16.918582 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 01:05:16.918634 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 16 01:05:16.918760 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 01:05:16.918833 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 01:05:16.919083 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 01:05:16.919148 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 16 01:05:16.919209 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 16 01:05:16.919375 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 16 01:05:16.919439 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 16 01:05:16.919499 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 01:05:16.919567 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 01:05:16.919628 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 16 01:05:16.919764 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 16 01:05:16.919826 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 16 01:05:16.919893 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 01:05:16.919958 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 16 01:05:16.920020 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 16 01:05:16.920083 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 16 01:05:16.920149 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 01:05:16.920212 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 16 01:05:16.920383 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 16 01:05:16.920443 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 16 01:05:16.920505 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 16 01:05:16.920570 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 01:05:16.920630 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 01:05:16.920770 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 01:05:16.920832 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 16 01:05:16.920893 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 16 01:05:16.920962 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 01:05:16.921022 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 16 01:05:16.921030 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 01:05:16.921035 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 01:05:16.921041 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 01:05:16.921047 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 01:05:16.921053 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 01:05:16.921058 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 01:05:16.921064 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 01:05:16.921071 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 01:05:16.921077 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 01:05:16.921082 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 01:05:16.921088 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 01:05:16.921093 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 01:05:16.921099 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 01:05:16.921104 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 01:05:16.921109 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 01:05:16.921114 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 01:05:16.921122 kernel: iommu: Default domain type: Translated
Apr 16 01:05:16.921127 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 01:05:16.921133 kernel: efivars: Registered efivars operations
Apr 16 01:05:16.921138 kernel: PCI: Using ACPI for IRQ routing
Apr 16 01:05:16.921144 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 01:05:16.921150 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 16 01:05:16.921155 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 16 01:05:16.921161 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 16 01:05:16.921166 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 16 01:05:16.921344 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 01:05:16.921405 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 01:05:16.921463 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 01:05:16.921470 kernel: vgaarb: loaded
Apr 16 01:05:16.921476 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 01:05:16.921482 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 01:05:16.921488 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 01:05:16.921493 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 01:05:16.921499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 01:05:16.921507 kernel: pnp: PnP ACPI init
Apr 16 01:05:16.921570 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 01:05:16.921578 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 01:05:16.921584 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 01:05:16.921589 kernel: NET: Registered PF_INET protocol family
Apr 16 01:05:16.921595 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 01:05:16.921601 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 01:05:16.921606 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 01:05:16.921614 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 01:05:16.921619 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 01:05:16.921625 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 01:05:16.921631 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 01:05:16.921636 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 01:05:16.921707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 01:05:16.921713 kernel: NET: Registered PF_XDP protocol family
Apr 16 01:05:16.921780 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 16 01:05:16.921863 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 16 01:05:16.921921 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 01:05:16.921975 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 01:05:16.922029 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 01:05:16.922083 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 01:05:16.922137 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 01:05:16.922189 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 16 01:05:16.922196 kernel: PCI: CLS 0 bytes, default 64
Apr 16 01:05:16.922204 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 01:05:16.922210 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 01:05:16.922325 kernel: Initialise system trusted keyrings
Apr 16 01:05:16.922331 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 01:05:16.922337 kernel: Key type asymmetric registered
Apr 16 01:05:16.922342 kernel: Asymmetric key parser 'x509' registered
Apr 16 01:05:16.922348 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 01:05:16.922353 kernel: io scheduler mq-deadline registered
Apr 16 01:05:16.922359 kernel: io scheduler kyber registered
Apr 16 01:05:16.922367 kernel: io scheduler bfq registered
Apr 16 01:05:16.922372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 01:05:16.922378 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 01:05:16.922384 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 01:05:16.922390 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 01:05:16.922395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 01:05:16.922401 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 01:05:16.922406 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 01:05:16.922412 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 01:05:16.922419 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 01:05:16.922723 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 01:05:16.922785 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 01:05:16.922793 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 01:05:16.922848 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T01:05:15 UTC (1776301515)
Apr 16 01:05:16.922904 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 16 01:05:16.922911 kernel: intel_pstate: CPU model not supported
Apr 16 01:05:16.922916 kernel: efifb: probing for efifb
Apr 16 01:05:16.922925 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 16 01:05:16.922931 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 16 01:05:16.922936 kernel: efifb: scrolling: redraw
Apr 16 01:05:16.922942 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 16 01:05:16.922947 kernel: Console: switching to colour frame buffer device 100x37
Apr 16 01:05:16.922953 kernel: fb0: EFI VGA frame buffer device
Apr 16 01:05:16.922971 kernel: pstore: Using crash dump compression: deflate
Apr 16 01:05:16.922978 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 01:05:16.922984 kernel: NET: Registered PF_INET6 protocol family
Apr 16 01:05:16.922991 kernel: Segment Routing with IPv6
Apr 16 01:05:16.922997 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 01:05:16.923004 kernel: NET: Registered PF_PACKET protocol family
Apr 16 01:05:16.923010 kernel: Key type dns_resolver registered
Apr 16 01:05:16.923016 kernel: IPI shorthand broadcast: enabled
Apr 16 01:05:16.923021 kernel: sched_clock: Marking stable (4744179018, 486572568)->(5484283748, -253532162)
Apr 16 01:05:16.923027 kernel: registered taskstats version 1
Apr 16 01:05:16.923033 kernel: Loading compiled-in X.509 certificates
Apr 16 01:05:16.923038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 01:05:16.923046 kernel: Key type .fscrypt registered
Apr 16 01:05:16.923051 kernel: Key type fscrypt-provisioning registered
Apr 16 01:05:16.923057 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 01:05:16.923062 kernel: ima: Allocated hash algorithm: sha1
Apr 16 01:05:16.923068 kernel: ima: No architecture policies found
Apr 16 01:05:16.923074 kernel: clk: Disabling unused clocks
Apr 16 01:05:16.923079 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 01:05:16.923085 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 01:05:16.923091 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 01:05:16.923098 kernel: Run /init as init process
Apr 16 01:05:16.923104 kernel: with arguments:
Apr 16 01:05:16.923109 kernel: /init
Apr 16 01:05:16.923115 kernel: with environment:
Apr 16 01:05:16.923120 kernel: HOME=/
Apr 16 01:05:16.923126 kernel: TERM=linux
Apr 16 01:05:16.923186 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 01:05:16.923195 systemd[1]: Detected virtualization kvm.
Apr 16 01:05:16.923204 systemd[1]: Detected architecture x86-64.
Apr 16 01:05:16.923210 systemd[1]: Running in initrd.
Apr 16 01:05:16.923308 systemd[1]: No hostname configured, using default hostname.
Apr 16 01:05:16.923315 systemd[1]: Hostname set to .
Apr 16 01:05:16.923321 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 01:05:16.923329 systemd[1]: Queued start job for default target initrd.target.
Apr 16 01:05:16.923335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 01:05:16.923341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 01:05:16.923348 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 01:05:16.923354 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 01:05:16.923360 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 01:05:16.923366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 01:05:16.923375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 01:05:16.923383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 01:05:16.923393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 01:05:16.923403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 01:05:16.923412 systemd[1]: Reached target paths.target - Path Units.
Apr 16 01:05:16.923425 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 01:05:16.923443 systemd[1]: Reached target swap.target - Swaps.
Apr 16 01:05:16.923449 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 01:05:16.923457 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 01:05:16.923463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 01:05:16.923469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 01:05:16.923475 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 01:05:16.923481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 01:05:16.923488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 01:05:16.923494 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 01:05:16.923500 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 01:05:16.923508 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 01:05:16.923514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 01:05:16.923520 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 01:05:16.923526 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 01:05:16.923532 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 01:05:16.923538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 01:05:16.923544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:05:16.923566 systemd-journald[194]: Collecting audit messages is disabled.
Apr 16 01:05:16.923583 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 01:05:16.923590 systemd-journald[194]: Journal started
Apr 16 01:05:16.923605 systemd-journald[194]: Runtime Journal (/run/log/journal/2c91b5d986a042c98dcd7b8c3b589b30) is 6.0M, max 48.3M, 42.2M free.
Apr 16 01:05:16.937043 systemd-modules-load[195]: Inserted module 'overlay'
Apr 16 01:05:16.951112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 01:05:16.964321 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 01:05:16.971533 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 01:05:16.988135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 01:05:16.996489 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 01:05:17.013000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:05:17.025478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:05:17.060712 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 01:05:17.067396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 01:05:17.067399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 01:05:17.069901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 01:05:17.113546 kernel: Bridge firewalling registered
Apr 16 01:05:17.113007 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 16 01:05:17.113454 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:05:17.121184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 01:05:17.128086 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 01:05:17.140715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 01:05:17.170632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 01:05:17.180203 dracut-cmdline[223]: dracut-dracut-053
Apr 16 01:05:17.190361 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:05:17.218481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 01:05:17.240529 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 01:05:17.282329 systemd-resolved[279]: Positive Trust Anchors:
Apr 16 01:05:17.282385 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 01:05:17.282410 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 01:05:17.290077 systemd-resolved[279]: Defaulting to hostname 'linux'.
Apr 16 01:05:17.292935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 01:05:17.367880 kernel: SCSI subsystem initialized
Apr 16 01:05:17.304454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 01:05:17.382512 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 01:05:17.401459 kernel: iscsi: registered transport (tcp)
Apr 16 01:05:17.442700 kernel: iscsi: registered transport (qla4xxx)
Apr 16 01:05:17.442784 kernel: QLogic iSCSI HBA Driver
Apr 16 01:05:17.511061 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 01:05:17.537743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 01:05:17.605120 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 01:05:17.605732 kernel: device-mapper: uevent: version 1.0.3
Apr 16 01:05:17.613547 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 16 01:05:17.676937 kernel: raid6: avx512x4 gen() 37710 MB/s
Apr 16 01:05:17.696815 kernel: raid6: avx512x2 gen() 36792 MB/s
Apr 16 01:05:17.716435 kernel: raid6: avx512x1 gen() 37596 MB/s
Apr 16 01:05:17.736877 kernel: raid6: avx2x4 gen() 31776 MB/s
Apr 16 01:05:17.756430 kernel: raid6: avx2x2 gen() 32487 MB/s
Apr 16 01:05:17.780520 kernel: raid6: avx2x1 gen() 23503 MB/s
Apr 16 01:05:17.780792 kernel: raid6: using algorithm avx512x4 gen() 37710 MB/s
Apr 16 01:05:17.804591 kernel: raid6: .... xor() 8074 MB/s, rmw enabled
Apr 16 01:05:17.804833 kernel: raid6: using avx512x2 recovery algorithm
Apr 16 01:05:17.833416 kernel: xor: automatically using best checksumming function avx
Apr 16 01:05:18.141493 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 01:05:18.160429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 01:05:18.186169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 01:05:18.204144 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 16 01:05:18.209634 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 01:05:18.212564 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 01:05:18.257969 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Apr 16 01:05:18.304783 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 01:05:18.324804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 01:05:18.365092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 01:05:18.384781 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 01:05:18.414331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 01:05:18.421459 kernel: cryptd: max_cpu_qlen set to 1000
Apr 16 01:05:18.431779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 01:05:18.439889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 01:05:18.456080 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 01:05:18.480843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 01:05:18.485126 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 01:05:18.485387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:05:18.518582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:05:18.527557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 01:05:18.528838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:05:18.539530 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:05:18.588580 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 16 01:05:18.589734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:05:18.606096 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 01:05:18.648817 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 16 01:05:18.649013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 01:05:18.649023 kernel: GPT:9289727 != 19775487
Apr 16 01:05:18.649033 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 01:05:18.649040 kernel: GPT:9289727 != 19775487
Apr 16 01:05:18.649047 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 01:05:18.653558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:05:18.686585 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 16 01:05:18.686723 kernel: libata version 3.00 loaded.
Apr 16 01:05:18.693344 kernel: AES CTR mode by8 optimization enabled
Apr 16 01:05:18.719952 kernel: ahci 0000:00:1f.2: version 3.0
Apr 16 01:05:18.720182 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 16 01:05:18.729989 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 16 01:05:18.733506 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 16 01:05:18.785430 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 16 01:05:18.785735 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 16 01:05:18.806784 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (467)
Apr 16 01:05:18.807004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:05:18.826563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 01:05:18.854055 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Apr 16 01:05:18.854075 kernel: scsi host0: ahci
Apr 16 01:05:18.857390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 01:05:18.874606 kernel: scsi host1: ahci
Apr 16 01:05:18.874816 kernel: scsi host2: ahci
Apr 16 01:05:18.874893 kernel: scsi host3: ahci
Apr 16 01:05:18.874963 kernel: scsi host4: ahci
Apr 16 01:05:18.880547 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 16 01:05:18.886070 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 16 01:05:18.912049 kernel: scsi host5: ahci
Apr 16 01:05:18.912495 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 16 01:05:18.912508 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 16 01:05:18.932920 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 16 01:05:18.933127 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 16 01:05:18.933137 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 16 01:05:18.946577 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 16 01:05:18.950081 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 01:05:18.954528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 01:05:18.999612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:05:18.999635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:05:18.999644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:05:18.999874 disk-uuid[573]: Primary Header is updated.
Apr 16 01:05:18.999874 disk-uuid[573]: Secondary Entries is updated.
Apr 16 01:05:18.999874 disk-uuid[573]: Secondary Header is updated.
Apr 16 01:05:19.270854 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 16 01:05:19.271123 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 16 01:05:19.286198 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 16 01:05:19.286569 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 16 01:05:19.292640 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 16 01:05:19.302977 kernel: ata3.00: applying bridge limits
Apr 16 01:05:19.312934 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 16 01:05:19.313075 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 16 01:05:19.319774 kernel: ata3.00: configured for UDMA/100
Apr 16 01:05:19.335565 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 16 01:05:19.404919 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 16 01:05:19.406576 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 16 01:05:19.425873 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 16 01:05:20.002492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 16 01:05:20.005631 disk-uuid[574]: The operation has completed successfully.
Apr 16 01:05:20.058598 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 01:05:20.058963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 01:05:20.103131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 01:05:20.123078 sh[593]: Success
Apr 16 01:05:20.167463 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 16 01:05:20.266797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 01:05:20.277104 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 01:05:20.297201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 01:05:20.375637 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984
Apr 16 01:05:20.375913 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:05:20.375922 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 16 01:05:20.384831 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 16 01:05:20.391974 kernel: BTRFS info (device dm-0): using free space tree
Apr 16 01:05:20.437760 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 01:05:20.446461 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 01:05:20.479748 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 01:05:20.500642 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 01:05:20.603608 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:05:20.603794 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:05:20.603812 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:05:20.628919 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:05:20.654789 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 16 01:05:20.672126 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:05:20.694860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 01:05:20.717753 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 01:05:20.937407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 01:05:20.967768 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 01:05:21.022047 systemd-networkd[779]: lo: Link UP
Apr 16 01:05:21.022134 systemd-networkd[779]: lo: Gained carrier
Apr 16 01:05:21.024174 systemd-networkd[779]: Enumeration completed
Apr 16 01:05:21.024531 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 01:05:21.028349 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:05:21.028351 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 01:05:21.036098 systemd-networkd[779]: eth0: Link UP
Apr 16 01:05:21.036101 systemd-networkd[779]: eth0: Gained carrier
Apr 16 01:05:21.036108 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:05:21.053903 systemd[1]: Reached target network.target - Network.
Apr 16 01:05:21.133017 ignition[723]: Ignition 2.19.0
Apr 16 01:05:21.133022 ignition[723]: Stage: fetch-offline
Apr 16 01:05:21.133061 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:05:21.133069 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:05:21.133456 ignition[723]: parsed url from cmdline: ""
Apr 16 01:05:21.133460 ignition[723]: no config URL provided
Apr 16 01:05:21.133466 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 01:05:21.133480 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Apr 16 01:05:21.133509 ignition[723]: op(1): [started] loading QEMU firmware config module
Apr 16 01:05:21.133514 ignition[723]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 16 01:05:21.191378 ignition[723]: op(1): [finished] loading QEMU firmware config module
Apr 16 01:05:21.210755 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 01:05:22.073022 ignition[723]: parsing config with SHA512: 526cb11f2e724838fa3d45b81073228af0cf3efd9bc794ab4585f6fd20044177802336d97a52d6e2c74ed6dc0d2c7ce5da8a5b775ff2804a42d7ab00c26a63c9
Apr 16 01:05:22.125574 unknown[723]: fetched base config from "system"
Apr 16 01:05:22.125827 unknown[723]: fetched user config from "qemu"
Apr 16 01:05:22.140602 ignition[723]: fetch-offline: fetch-offline passed
Apr 16 01:05:22.140883 ignition[723]: Ignition finished successfully
Apr 16 01:05:22.143090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 01:05:22.159949 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 16 01:05:22.170615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 01:05:22.232132 ignition[787]: Ignition 2.19.0
Apr 16 01:05:22.232213 ignition[787]: Stage: kargs
Apr 16 01:05:22.232882 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:05:22.232891 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:05:22.234372 ignition[787]: kargs: kargs passed
Apr 16 01:05:22.263093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 01:05:22.234411 ignition[787]: Ignition finished successfully
Apr 16 01:05:22.305788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 01:05:22.392799 ignition[795]: Ignition 2.19.0
Apr 16 01:05:22.392875 ignition[795]: Stage: disks
Apr 16 01:05:22.393058 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 16 01:05:22.393066 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 01:05:22.408813 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 01:05:22.394021 ignition[795]: disks: disks passed
Apr 16 01:05:22.416047 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 01:05:22.394059 ignition[795]: Ignition finished successfully
Apr 16 01:05:22.444528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 01:05:22.446850 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 16 01:05:22.464855 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 01:05:22.478830 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 01:05:22.499125 systemd[1]: Reached target basic.target - Basic System.
Apr 16 01:05:22.536878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 01:05:22.591820 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 16 01:05:22.602661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 01:05:22.635611 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 01:05:22.889417 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 01:05:22.891424 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 01:05:22.891847 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 01:05:22.935623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 01:05:22.944556 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 01:05:22.987764 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Apr 16 01:05:22.987793 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 01:05:22.987803 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 01:05:22.987813 kernel: BTRFS info (device vda6): using free space tree
Apr 16 01:05:22.994917 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 01:05:22.995043 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 01:05:22.995067 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 01:05:23.020614 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 01:05:23.060557 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 01:05:23.086825 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 01:05:23.080609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 01:05:23.172081 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 01:05:23.186930 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Apr 16 01:05:23.206945 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 01:05:23.228111 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 01:05:23.509170 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 01:05:23.535560 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 01:05:23.553382 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 01:05:23.584639 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 01:05:23.601403 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:05:23.646899 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 01:05:23.681153 ignition[926]: INFO : Ignition 2.19.0 Apr 16 01:05:23.681153 ignition[926]: INFO : Stage: mount Apr 16 01:05:23.695529 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:05:23.695529 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:05:23.695529 ignition[926]: INFO : mount: mount passed Apr 16 01:05:23.695529 ignition[926]: INFO : Ignition finished successfully Apr 16 01:05:23.736496 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 01:05:23.771985 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 01:05:23.903774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 01:05:23.991831 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Apr 16 01:05:23.991944 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:05:24.008512 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:05:24.008570 kernel: BTRFS info (device vda6): using free space tree Apr 16 01:05:24.042967 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 01:05:24.047509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 01:05:24.133545 ignition[954]: INFO : Ignition 2.19.0 Apr 16 01:05:24.133545 ignition[954]: INFO : Stage: files Apr 16 01:05:24.146520 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:05:24.146520 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:05:24.166571 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Apr 16 01:05:24.178491 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 01:05:24.178491 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 01:05:24.203600 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 01:05:24.203600 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 01:05:24.203600 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 01:05:24.203600 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 16 01:05:24.203600 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 16 01:05:24.203600 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:05:24.203600 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 01:05:24.190085 unknown[954]: wrote ssh authorized keys file for user: core Apr 16 01:05:24.350655 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 16 01:05:24.453460 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:05:24.453460 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:05:24.485099 ignition[954]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 01:05:24.485099 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 16 01:05:24.750785 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 16 01:05:25.090387 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 16 01:05:25.090387 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 16 01:05:25.122548 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 16 01:05:25.141341 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 01:05:25.363088 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:05:25.380197 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:05:25.380197 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 16 01:05:25.380197 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 16 01:05:25.380197 ignition[954]: INFO : files: op(14): [finished] setting preset to 
enabled for "prepare-helm.service" Apr 16 01:05:25.380197 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 01:05:25.440877 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 01:05:25.440877 ignition[954]: INFO : files: files passed Apr 16 01:05:25.440877 ignition[954]: INFO : Ignition finished successfully Apr 16 01:05:25.427159 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 01:05:25.510819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 01:05:25.524434 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 01:05:25.534101 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 01:05:25.534577 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 01:05:25.593162 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 01:05:25.606152 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:05:25.606152 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:05:25.620090 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:05:25.606968 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:05:25.629783 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 01:05:25.691917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 01:05:25.764812 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 01:05:25.765043 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 01:05:25.786181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 01:05:25.805471 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 01:05:25.813978 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 01:05:25.816064 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 01:05:25.880483 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:05:25.913898 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 01:05:25.938539 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:05:25.959546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:05:25.971009 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 01:05:25.987090 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 01:05:25.987478 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:05:26.016023 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 01:05:26.036899 systemd[1]: Stopped target basic.target - Basic System. Apr 16 01:05:26.044595 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 01:05:26.077788 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 16 01:05:26.078047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 01:05:26.098020 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 01:05:26.127754 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 01:05:26.135873 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 01:05:26.171840 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 01:05:26.180795 systemd[1]: Stopped target swap.target - Swaps. Apr 16 01:05:26.200383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 01:05:26.200520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 01:05:26.248672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:05:26.257513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:05:26.297763 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 01:05:26.308078 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:05:26.320041 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 01:05:26.320149 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 01:05:26.358209 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 01:05:26.358800 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 01:05:26.388563 systemd[1]: Stopped target paths.target - Path Units. Apr 16 01:05:26.389074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 01:05:26.414394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:05:26.426111 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 01:05:26.445870 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 01:05:26.461961 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 01:05:26.462041 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 01:05:26.479522 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 01:05:26.479786 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 01:05:26.486464 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 01:05:26.486570 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:05:26.516892 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 01:05:26.517032 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 01:05:26.586942 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 01:05:26.603925 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 01:05:26.604158 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:05:26.616823 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 01:05:26.662081 ignition[1009]: INFO : Ignition 2.19.0 Apr 16 01:05:26.662081 ignition[1009]: INFO : Stage: umount Apr 16 01:05:26.642645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 16 01:05:26.701951 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:05:26.701951 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:05:26.701951 ignition[1009]: INFO : umount: umount passed Apr 16 01:05:26.701951 ignition[1009]: INFO : Ignition finished successfully Apr 16 01:05:26.643529 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:05:26.654400 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 01:05:26.654525 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 01:05:26.680082 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 01:05:26.680457 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 01:05:26.687157 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 01:05:26.687536 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 01:05:26.715360 systemd[1]: Stopped target network.target - Network. Apr 16 01:05:26.740596 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 01:05:26.740794 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 01:05:26.758140 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 01:05:26.758386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 01:05:26.766035 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 01:05:26.766080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 01:05:26.794990 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 01:05:26.795056 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 01:05:26.795587 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 01:05:26.815870 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 01:05:26.817588 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 01:05:26.854616 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 01:05:26.854814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 01:05:26.872054 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 01:05:26.872137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 01:05:26.874533 systemd-networkd[779]: eth0: DHCPv6 lease lost Apr 16 01:05:26.911188 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 01:05:26.911641 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 01:05:26.942095 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 01:05:26.942558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 01:05:26.952512 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 01:05:26.952547 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:05:27.033623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 01:05:27.052495 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 01:05:27.052566 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 01:05:27.081521 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 01:05:27.081664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 16 01:05:27.096809 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 01:05:27.096870 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 01:05:27.106884 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 01:05:27.106930 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:05:27.124994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:05:27.277900 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 01:05:27.278093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:05:27.302893 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 01:05:27.303050 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 01:05:27.333093 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 01:05:27.333137 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 01:05:27.351414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 01:05:27.351499 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:05:27.359019 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 01:05:27.359066 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 01:05:27.395770 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 01:05:27.395838 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 01:05:27.423482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 01:05:27.423545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:05:27.494839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 01:05:27.505052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 01:05:27.505125 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:05:27.505620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:05:27.505653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:05:27.517826 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 01:05:27.518005 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 01:05:27.526403 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 01:05:27.536600 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 01:05:27.564998 systemd[1]: Switching root. Apr 16 01:05:27.683894 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 16 01:05:27.684032 systemd-journald[194]: Journal stopped Apr 16 01:05:30.271590 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 01:05:30.271664 kernel: SELinux: policy capability open_perms=1 Apr 16 01:05:30.271683 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 01:05:30.271786 kernel: SELinux: policy capability always_check_network=0 Apr 16 01:05:30.271799 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 01:05:30.271812 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 01:05:30.271824 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 01:05:30.271836 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 01:05:30.271848 kernel: audit: type=1403 audit(1776301528.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 01:05:30.271869 systemd[1]: Successfully loaded SELinux policy in 90.415ms. Apr 16 01:05:30.271895 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.005ms. Apr 16 01:05:30.271914 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 01:05:30.271928 systemd[1]: Detected virtualization kvm. Apr 16 01:05:30.271941 systemd[1]: Detected architecture x86-64. Apr 16 01:05:30.271953 systemd[1]: Detected first boot. Apr 16 01:05:30.271965 systemd[1]: Initializing machine ID from VM UUID. Apr 16 01:05:30.271978 zram_generator::config[1067]: No configuration found. Apr 16 01:05:30.271998 systemd[1]: Populated /etc with preset unit settings. Apr 16 01:05:30.272010 systemd[1]: Queued start job for default target multi-user.target. Apr 16 01:05:30.272024 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 01:05:30.272038 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 01:05:30.272051 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 01:05:30.272065 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 01:05:30.272077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 01:05:30.272092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 01:05:30.272105 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 01:05:30.272116 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 01:05:30.272128 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 01:05:30.272144 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:05:30.272156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:05:30.272169 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 01:05:30.272181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 01:05:30.272193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 01:05:30.272205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 16 01:05:30.272394 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 01:05:30.272415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:05:30.272429 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 01:05:30.272446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:05:30.272462 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 01:05:30.272474 systemd[1]: Reached target slices.target - Slice Units. Apr 16 01:05:30.272487 systemd[1]: Reached target swap.target - Swaps. Apr 16 01:05:30.272499 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 01:05:30.272510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 01:05:30.272522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 01:05:30.272536 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 16 01:05:30.272553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:05:30.272565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 01:05:30.272578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:05:30.272590 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 01:05:30.272602 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 01:05:30.272613 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 01:05:30.272626 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 01:05:30.272639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:30.272650 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 01:05:30.272666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 01:05:30.272677 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 01:05:30.272689 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 01:05:30.272784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:05:30.272800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 01:05:30.272811 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 01:05:30.272823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:05:30.272834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:05:30.272848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:05:30.272863 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 01:05:30.272874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:05:30.272888 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 01:05:30.272902 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Apr 16 01:05:30.272914 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 16 01:05:30.272928 kernel: ACPI: bus type drm_connector registered Apr 16 01:05:30.272940 kernel: fuse: init (API version 7.39) Apr 16 01:05:30.272950 kernel: loop: module loaded Apr 16 01:05:30.272966 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 01:05:30.272979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 01:05:30.273015 systemd-journald[1167]: Collecting audit messages is disabled. Apr 16 01:05:30.273044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 01:05:30.273064 systemd-journald[1167]: Journal started Apr 16 01:05:30.273087 systemd-journald[1167]: Runtime Journal (/run/log/journal/2c91b5d986a042c98dcd7b8c3b589b30) is 6.0M, max 48.3M, 42.2M free. Apr 16 01:05:30.308557 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 01:05:30.320413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 01:05:30.341561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:30.353615 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 01:05:30.362483 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 01:05:30.366114 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 01:05:30.366452 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 01:05:30.375605 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 01:05:30.384504 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 01:05:30.395066 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 01:05:30.403380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 01:05:30.413805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:05:30.422152 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 01:05:30.422573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 01:05:30.426423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:05:30.427804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:05:30.428580 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:05:30.428860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:05:30.437563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:05:30.437941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:05:30.446468 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 01:05:30.446657 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 01:05:30.455004 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:05:30.455532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:05:30.464016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 01:05:30.472880 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Apr 16 01:05:30.484692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:05:30.505796 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 01:05:30.518416 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 01:05:30.529401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 01:05:30.530412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:05:30.537488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 01:05:30.552183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 01:05:30.568815 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 16 01:05:30.581909 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 01:05:30.592594 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 01:05:30.607149 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 01:05:30.618185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:05:30.629896 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 01:05:30.641677 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 01:05:30.652483 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 01:05:30.661973 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 16 01:05:30.661986 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Apr 16 01:05:30.662585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:05:30.664935 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 01:05:30.674528 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 16 01:05:30.675148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:05:30.693430 systemd-journald[1167]: Time spent on flushing to /var/log/journal/2c91b5d986a042c98dcd7b8c3b589b30 is 12.912ms for 990 entries. Apr 16 01:05:30.693430 systemd-journald[1167]: System Journal (/var/log/journal/2c91b5d986a042c98dcd7b8c3b589b30) is 8.0M, max 195.6M, 187.6M free. Apr 16 01:05:30.751882 systemd-journald[1167]: Received client request to flush runtime journal. Apr 16 01:05:30.693503 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 01:05:30.710998 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 01:05:30.721432 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 01:05:30.753639 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 01:05:30.765584 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 01:05:30.787476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 01:05:30.812128 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. 
Apr 16 01:05:30.812433 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Apr 16 01:05:30.817652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:05:31.184476 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 01:05:31.208465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:05:31.237619 systemd-udevd[1234]: Using default interface naming scheme 'v255'. Apr 16 01:05:31.284038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:05:31.305627 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 01:05:31.334563 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 01:05:31.358625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1242) Apr 16 01:05:31.367209 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 16 01:05:31.394913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 01:05:31.454178 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 01:05:31.511046 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 16 01:05:31.524905 kernel: ACPI: button: Power Button [PWRF] Apr 16 01:05:31.540142 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 16 01:05:31.541152 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 01:05:31.557166 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 16 01:05:31.558067 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 01:05:31.547600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:05:31.601514 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 16 01:05:31.607972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:05:31.609601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:05:31.627851 systemd-networkd[1244]: lo: Link UP Apr 16 01:05:31.628138 systemd-networkd[1244]: lo: Gained carrier Apr 16 01:05:31.629486 systemd-networkd[1244]: Enumeration completed Apr 16 01:05:31.632387 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:05:31.632445 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 01:05:31.634384 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 01:05:31.635098 systemd-networkd[1244]: eth0: Link UP Apr 16 01:05:31.635156 systemd-networkd[1244]: eth0: Gained carrier Apr 16 01:05:31.635188 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:05:31.636496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:05:31.645534 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 01:05:31.690954 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 01:05:31.694451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 01:05:32.279973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 16 01:05:32.513961 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 16 01:05:32.533646 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 16 01:05:32.559465 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:05:32.594862 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 16 01:05:32.608060 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:05:32.628615 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 16 01:05:32.650504 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 16 01:05:32.679520 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 16 01:05:32.691575 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 01:05:32.702872 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 01:05:32.702975 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 01:05:32.712092 systemd[1]: Reached target machines.target - Containers. Apr 16 01:05:32.721187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 16 01:05:32.744454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 01:05:32.760568 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 01:05:32.769595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:05:32.771156 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 01:05:32.783595 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 16 01:05:32.797597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 01:05:32.798575 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 01:05:32.821168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 01:05:32.835872 kernel: loop0: detected capacity change from 0 to 140768 Apr 16 01:05:32.844649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 01:05:32.845490 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 16 01:05:32.896676 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 01:05:32.942399 kernel: loop1: detected capacity change from 0 to 142488 Apr 16 01:05:33.013588 kernel: loop2: detected capacity change from 0 to 228704 Apr 16 01:05:33.070875 systemd-networkd[1244]: eth0: Gained IPv6LL Apr 16 01:05:33.077653 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 01:05:33.096082 kernel: loop3: detected capacity change from 0 to 140768 Apr 16 01:05:33.132489 kernel: loop4: detected capacity change from 0 to 142488 Apr 16 01:05:33.212371 kernel: loop5: detected capacity change from 0 to 228704 Apr 16 01:05:33.247622 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Apr 16 01:05:33.248087 (sd-merge)[1309]: Merged extensions into '/usr'. Apr 16 01:05:33.255500 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 01:05:33.255598 systemd[1]: Reloading... Apr 16 01:05:33.333519 zram_generator::config[1334]: No configuration found. Apr 16 01:05:33.435195 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 01:05:33.495982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:05:33.547648 systemd[1]: Reloading finished in 291 ms. Apr 16 01:05:33.576499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 01:05:33.589695 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 01:05:33.623881 systemd[1]: Starting ensure-sysext.service... Apr 16 01:05:33.631991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 01:05:33.643970 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Apr 16 01:05:33.644062 systemd[1]: Reloading... Apr 16 01:05:33.668476 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 01:05:33.668693 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 01:05:33.669480 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 01:05:33.669646 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Apr 16 01:05:33.669686 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Apr 16 01:05:33.673176 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:05:33.673500 systemd-tmpfiles[1384]: Skipping /boot Apr 16 01:05:33.681054 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 01:05:33.681129 systemd-tmpfiles[1384]: Skipping /boot Apr 16 01:05:33.727461 zram_generator::config[1413]: No configuration found. Apr 16 01:05:33.906632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:05:33.961817 systemd[1]: Reloading finished in 317 ms. Apr 16 01:05:33.997090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:05:34.027618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 01:05:34.039133 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 01:05:34.052641 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 01:05:34.066899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 01:05:34.080474 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 01:05:34.098205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 16 01:05:34.098623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:05:34.103511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:05:34.117592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:05:34.130455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:05:34.140652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:05:34.140854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:34.141952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:05:34.142082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:05:34.154142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:05:34.154435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:05:34.166667 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 01:05:34.175890 augenrules[1485]: No rules Apr 16 01:05:34.180914 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 01:05:34.193656 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 01:05:34.205650 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:05:34.206162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:05:34.226168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 01:05:34.242479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:34.242693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:05:34.243144 systemd-resolved[1467]: Positive Trust Anchors: Apr 16 01:05:34.243451 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 01:05:34.243476 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 01:05:34.249694 systemd-resolved[1467]: Defaulting to hostname 'linux'. Apr 16 01:05:34.250608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:05:34.261561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:05:34.273177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:05:34.282055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:05:34.283945 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 16 01:05:34.292637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 01:05:34.292814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:34.293543 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 01:05:34.303623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:05:34.303918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:05:34.315971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:05:34.316377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:05:34.327681 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:05:34.327920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:05:34.338842 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 01:05:34.355672 systemd[1]: Reached target network.target - Network. Apr 16 01:05:34.366671 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 01:05:34.376042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:05:34.386594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:34.386964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:05:34.404873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:05:34.418367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:05:34.427986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:05:34.440408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:05:34.451194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:05:34.451639 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 01:05:34.451692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:05:34.453540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:05:34.453836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:05:34.465818 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:05:34.466012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:05:34.475887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:05:34.476079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:05:34.487043 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:05:34.487403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:05:34.499073 systemd[1]: Finished ensure-sysext.service. 
Apr 16 01:05:34.517079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:05:34.517368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:05:34.536653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 01:05:34.599656 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 01:05:34.611117 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 01:05:34.612078 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 01:05:34.612184 systemd-timesyncd[1529]: Initial clock synchronization to Thu 2026-04-16 01:05:34.709831 UTC. Apr 16 01:05:34.622605 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 01:05:34.633585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 01:05:34.644588 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 01:05:34.654895 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 01:05:34.654995 systemd[1]: Reached target paths.target - Path Units. Apr 16 01:05:34.662630 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 01:05:34.671381 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 01:05:34.680482 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 01:05:34.690882 systemd[1]: Reached target timers.target - Timer Units. Apr 16 01:05:34.703009 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 01:05:34.714602 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 01:05:34.724016 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 01:05:34.734574 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 01:05:34.744379 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 01:05:34.752875 systemd[1]: Reached target basic.target - Basic System. Apr 16 01:05:34.760863 systemd[1]: System is tainted: cgroupsv1 Apr 16 01:05:34.760895 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:05:34.760911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 01:05:34.762579 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 01:05:34.773135 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 01:05:34.783011 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 01:05:34.792461 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 01:05:34.803599 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 01:05:34.808867 jq[1538]: false Apr 16 01:05:34.813169 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 01:05:34.822503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 01:05:34.825052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 01:05:34.834611 extend-filesystems[1540]: Found loop3 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found loop4 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found loop5 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found sr0 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda1 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda2 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda3 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found usr Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda4 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda6 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda7 Apr 16 01:05:34.851814 extend-filesystems[1540]: Found vda9 Apr 16 01:05:34.851814 extend-filesystems[1540]: Checking size of /dev/vda9 Apr 16 01:05:34.960175 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 01:05:34.960199 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1554) Apr 16 01:05:34.844567 dbus-daemon[1537]: [system] SELinux support is enabled Apr 16 01:05:34.872180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 01:05:34.960624 extend-filesystems[1540]: Resized partition /dev/vda9 Apr 16 01:05:34.983671 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 01:05:34.897471 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 01:05:34.996844 extend-filesystems[1549]: resize2fs 1.47.1 (20-May-2024) Apr 16 01:05:34.932080 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 01:05:35.004671 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 01:05:35.004671 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 01:05:35.004671 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 01:05:35.016650 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 01:05:35.038839 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Apr 16 01:05:35.056970 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 01:05:35.065792 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 01:05:35.071052 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 01:05:35.083897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 01:05:35.095513 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 01:05:35.123680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 01:05:35.123973 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 01:05:35.124509 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 01:05:35.124747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 01:05:35.127859 jq[1580]: true Apr 16 01:05:35.135796 update_engine[1577]: I20260416 01:05:35.135645 1577 main.cc:92] Flatcar Update Engine starting Apr 16 01:05:35.139102 systemd[1]: motdgen.service: Deactivated successfully. 
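
The extend-filesystems.service run above enumerates the block devices and then grows the root filesystem online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) so that it fills the enlarged /dev/vda9 partition. In effect this is the same as resizing the mounted root by hand, along the lines of:

    # roughly what extend-filesystems.service did here: online resize of the mounted root fs
    resize2fs /dev/vda9
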
Apr 16 01:05:35.141542 update_engine[1577]: I20260416 01:05:35.139912 1577 update_check_scheduler.cc:74] Next update check in 6m40s Apr 16 01:05:35.140513 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 01:05:35.152106 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 01:05:35.162548 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 01:05:35.162900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 01:05:35.202460 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 01:05:35.210598 systemd-logind[1572]: Watching system buttons on /dev/input/event1 (Power Button) Apr 16 01:05:35.210695 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 01:05:35.215702 systemd-logind[1572]: New seat seat0. Apr 16 01:05:35.237122 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 01:05:35.251119 jq[1592]: true Apr 16 01:05:35.252757 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 01:05:35.253596 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 01:05:35.254217 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 01:05:35.265692 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 01:05:35.298142 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 01:05:35.304705 tar[1591]: linux-amd64/LICENSE Apr 16 01:05:35.304935 tar[1591]: linux-amd64/helm Apr 16 01:05:35.311074 systemd[1]: Started update-engine.service - Update Engine. Apr 16 01:05:35.338579 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 01:05:35.350112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 01:05:35.350690 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 01:05:35.351131 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 01:05:35.371595 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 01:05:35.372023 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 01:05:35.388605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 01:05:35.389904 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 01:05:35.408960 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Apr 16 01:05:35.418688 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 01:05:35.437984 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 01:05:35.438980 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 01:05:35.453765 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 01:05:35.468789 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 16 01:05:35.521751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 01:05:35.546483 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 01:05:35.572938 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 01:05:35.593746 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 01:05:35.619410 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 01:05:35.629976 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 01:05:35.654046 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:49706.service - OpenSSH per-connection server daemon (10.0.0.1:49706). Apr 16 01:05:35.694010 containerd[1593]: time="2026-04-16T01:05:35.692878135Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 16 01:05:35.744079 containerd[1593]: time="2026-04-16T01:05:35.743846051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.750451 containerd[1593]: time="2026-04-16T01:05:35.750218492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:05:35.750536 containerd[1593]: time="2026-04-16T01:05:35.750526990Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 01:05:35.750618 containerd[1593]: time="2026-04-16T01:05:35.750608916Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 01:05:35.751006 containerd[1593]: time="2026-04-16T01:05:35.750752739Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 01:05:35.751006 containerd[1593]: time="2026-04-16T01:05:35.750766727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751006 containerd[1593]: time="2026-04-16T01:05:35.750938808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751006 containerd[1593]: time="2026-04-16T01:05:35.750950318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751597 containerd[1593]: time="2026-04-16T01:05:35.751580455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751643 containerd[1593]: time="2026-04-16T01:05:35.751635972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751674 containerd[1593]: time="2026-04-16T01:05:35.751666688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751699 containerd[1593]: time="2026-04-16T01:05:35.751693637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 16 01:05:35.751797 containerd[1593]: time="2026-04-16T01:05:35.751786795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.754110 containerd[1593]: time="2026-04-16T01:05:35.752123088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 01:05:35.754110 containerd[1593]: time="2026-04-16T01:05:35.753529306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 01:05:35.754110 containerd[1593]: time="2026-04-16T01:05:35.753549111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 16 01:05:35.754110 containerd[1593]: time="2026-04-16T01:05:35.753832377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 16 01:05:35.754110 containerd[1593]: time="2026-04-16T01:05:35.753998710Z" level=info msg="metadata content store policy set" policy=shared Apr 16 01:05:35.770405 containerd[1593]: time="2026-04-16T01:05:35.770371630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 01:05:35.770519 containerd[1593]: time="2026-04-16T01:05:35.770508823Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 01:05:35.770559 containerd[1593]: time="2026-04-16T01:05:35.770552793Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 01:05:35.771159 containerd[1593]: time="2026-04-16T01:05:35.770586889Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 01:05:35.771414 containerd[1593]: time="2026-04-16T01:05:35.771400392Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 01:05:35.771595 containerd[1593]: time="2026-04-16T01:05:35.771582657Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 01:05:35.772531 containerd[1593]: time="2026-04-16T01:05:35.772515140Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 16 01:05:35.773445 containerd[1593]: time="2026-04-16T01:05:35.773426847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 01:05:35.773517 containerd[1593]: time="2026-04-16T01:05:35.773508189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 01:05:35.773552 containerd[1593]: time="2026-04-16T01:05:35.773545813Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 16 01:05:35.773585 containerd[1593]: time="2026-04-16T01:05:35.773578361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773614 containerd[1593]: time="2026-04-16T01:05:35.773607974Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 16 01:05:35.773648 containerd[1593]: time="2026-04-16T01:05:35.773641555Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773679 containerd[1593]: time="2026-04-16T01:05:35.773672992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773709 containerd[1593]: time="2026-04-16T01:05:35.773702282Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773737 containerd[1593]: time="2026-04-16T01:05:35.773731305Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773772 containerd[1593]: time="2026-04-16T01:05:35.773766352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773802 containerd[1593]: time="2026-04-16T01:05:35.773795743Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 01:05:35.773845 containerd[1593]: time="2026-04-16T01:05:35.773838195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.773876 containerd[1593]: time="2026-04-16T01:05:35.773869905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.773905 containerd[1593]: time="2026-04-16T01:05:35.773899066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.773938 containerd[1593]: time="2026-04-16T01:05:35.773932234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.773998 containerd[1593]: time="2026-04-16T01:05:35.773992003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774646 containerd[1593]: time="2026-04-16T01:05:35.774025765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774646 containerd[1593]: time="2026-04-16T01:05:35.774035614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774646 containerd[1593]: time="2026-04-16T01:05:35.774045156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774646 containerd[1593]: time="2026-04-16T01:05:35.774054756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774646 containerd[1593]: time="2026-04-16T01:05:35.774070891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.774926 containerd[1593]: time="2026-04-16T01:05:35.774913135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.775381 containerd[1593]: time="2026-04-16T01:05:35.775369243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.775633 containerd[1593]: time="2026-04-16T01:05:35.775623084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 16 01:05:35.775673 containerd[1593]: time="2026-04-16T01:05:35.775666392Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 16 01:05:35.775711 containerd[1593]: time="2026-04-16T01:05:35.775705090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.775754 containerd[1593]: time="2026-04-16T01:05:35.775747228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.775782 containerd[1593]: time="2026-04-16T01:05:35.775776728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 16 01:05:35.776746 containerd[1593]: time="2026-04-16T01:05:35.776731002Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 16 01:05:35.776806 containerd[1593]: time="2026-04-16T01:05:35.776796799Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 16 01:05:35.776835 containerd[1593]: time="2026-04-16T01:05:35.776828943Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 16 01:05:35.776865 containerd[1593]: time="2026-04-16T01:05:35.776857947Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 16 01:05:35.776906 containerd[1593]: time="2026-04-16T01:05:35.776898829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 16 01:05:35.776937 containerd[1593]: time="2026-04-16T01:05:35.776930406Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 16 01:05:35.776965 containerd[1593]: time="2026-04-16T01:05:35.776959865Z" level=info msg="NRI interface is disabled by configuration." Apr 16 01:05:35.776994 containerd[1593]: time="2026-04-16T01:05:35.776987931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 16 01:05:35.779973 containerd[1593]: time="2026-04-16T01:05:35.779925198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 01:05:35.781508 containerd[1593]: time="2026-04-16T01:05:35.780533197Z" level=info msg="Connect containerd service" Apr 16 01:05:35.781508 containerd[1593]: time="2026-04-16T01:05:35.780570559Z" level=info msg="using legacy CRI server" Apr 16 01:05:35.781508 containerd[1593]: time="2026-04-16T01:05:35.780577702Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 01:05:35.781508 containerd[1593]: time="2026-04-16T01:05:35.780657568Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 01:05:35.784879 containerd[1593]: time="2026-04-16T01:05:35.783945404Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 
01:05:35.785124 containerd[1593]: time="2026-04-16T01:05:35.784965784Z" level=info msg="Start subscribing containerd event" Apr 16 01:05:35.785124 containerd[1593]: time="2026-04-16T01:05:35.785012719Z" level=info msg="Start recovering state" Apr 16 01:05:35.785430 containerd[1593]: time="2026-04-16T01:05:35.785419097Z" level=info msg="Start event monitor" Apr 16 01:05:35.785737 containerd[1593]: time="2026-04-16T01:05:35.785472209Z" level=info msg="Start snapshots syncer" Apr 16 01:05:35.785737 containerd[1593]: time="2026-04-16T01:05:35.785482170Z" level=info msg="Start cni network conf syncer for default" Apr 16 01:05:35.785737 containerd[1593]: time="2026-04-16T01:05:35.785487660Z" level=info msg="Start streaming server" Apr 16 01:05:35.785737 containerd[1593]: time="2026-04-16T01:05:35.785446656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 01:05:35.785737 containerd[1593]: time="2026-04-16T01:05:35.785605936Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 01:05:35.811831 containerd[1593]: time="2026-04-16T01:05:35.805187918Z" level=info msg="containerd successfully booted in 0.114669s" Apr 16 01:05:35.806036 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 01:05:35.822829 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 49706 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:35.831006 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:35.848950 systemd-logind[1572]: New session 1 of user core. Apr 16 01:05:35.849738 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 01:05:35.875011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 01:05:35.904878 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 01:05:35.930719 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 01:05:35.947989 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 01:05:36.078630 systemd[1670]: Queued start job for default target default.target. Apr 16 01:05:36.079448 systemd[1670]: Created slice app.slice - User Application Slice. Apr 16 01:05:36.079570 systemd[1670]: Reached target paths.target - Paths. Apr 16 01:05:36.079581 systemd[1670]: Reached target timers.target - Timers. Apr 16 01:05:36.088541 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 01:05:36.108853 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 01:05:36.109144 systemd[1670]: Reached target sockets.target - Sockets. Apr 16 01:05:36.109200 systemd[1670]: Reached target basic.target - Basic System. Apr 16 01:05:36.109508 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 01:05:36.112375 systemd[1670]: Reached target default.target - Main User Target. Apr 16 01:05:36.112419 systemd[1670]: Startup finished in 149ms. Apr 16 01:05:36.131503 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 01:05:36.151199 tar[1591]: linux-amd64/README.md Apr 16 01:05:36.187135 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 01:05:36.249806 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:49712.service - OpenSSH per-connection server daemon (10.0.0.1:49712). 
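
The containerd error above, "failed to load cni during init ... no network config found in /etc/cni/net.d", is expected at this stage: no pod-network add-on has been installed yet, so the CRI plugin has nothing to load and will pick a configuration up later once one appears. Only to illustrate the file format containerd looks for there (for example as /etc/cni/net.d/10-example.conflist; every name and subnet below is hypothetical, not taken from this system), a minimal bridge conflist would be:

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
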
Apr 16 01:05:36.346175 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 49712 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:36.347912 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:36.357054 systemd-logind[1572]: New session 2 of user core. Apr 16 01:05:36.366465 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 01:05:36.473800 sshd[1687]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:36.487919 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:49714.service - OpenSSH per-connection server daemon (10.0.0.1:49714). Apr 16 01:05:36.501662 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:49712.service: Deactivated successfully. Apr 16 01:05:36.504680 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 01:05:36.505736 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit. Apr 16 01:05:36.511387 systemd-logind[1572]: Removed session 2. Apr 16 01:05:36.562548 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:36.566742 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:36.577789 systemd-logind[1572]: New session 3 of user core. Apr 16 01:05:36.590023 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 01:05:36.677603 sshd[1692]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:36.683766 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:49714.service: Deactivated successfully. Apr 16 01:05:36.686776 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 01:05:36.686791 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit. Apr 16 01:05:36.688935 systemd-logind[1572]: Removed session 3. Apr 16 01:05:36.753727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:05:36.766742 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 01:05:36.767073 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:05:36.778501 systemd[1]: Startup finished in 16.812s (kernel) + 8.832s (userspace) = 25.645s. Apr 16 01:05:38.110943 kubelet[1711]: E0416 01:05:38.110524 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:05:38.115168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:05:38.115827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:05:46.762972 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:42484.service - OpenSSH per-connection server daemon (10.0.0.1:42484). Apr 16 01:05:46.825546 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 42484 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:46.831906 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:46.848910 systemd-logind[1572]: New session 4 of user core. Apr 16 01:05:46.870943 systemd[1]: Started session-4.scope - Session 4 of User core. 
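
The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") repeats through the rest of this log: the unit is enabled and keeps being restarted, but its config file only exists once the node has been bootstrapped (for example by kubeadm init or join), after which the restarts stop failing. For orientation only, a minimal KubeletConfiguration of the general shape that ends up at that path might look like the sketch below; the values are illustrative, not recovered from this log (cgroupDriver: cgroupfs matches the driver reported later in the log):

    # /var/lib/kubelet/config.yaml -- illustrative sketch; normally generated during bootstrap, not hand-written
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    cgroupDriver: cgroupfs
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
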
Apr 16 01:05:46.960951 sshd[1725]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:46.970008 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:42488.service - OpenSSH per-connection server daemon (10.0.0.1:42488). Apr 16 01:05:46.970659 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:42484.service: Deactivated successfully. Apr 16 01:05:46.974992 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. Apr 16 01:05:46.977167 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 01:05:46.982172 systemd-logind[1572]: Removed session 4. Apr 16 01:05:47.045561 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 42488 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:47.047945 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:47.065989 systemd-logind[1572]: New session 5 of user core. Apr 16 01:05:47.099652 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 01:05:47.202132 sshd[1730]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:47.233446 kernel: hrtimer: interrupt took 6285737 ns Apr 16 01:05:47.295162 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:42500.service - OpenSSH per-connection server daemon (10.0.0.1:42500). Apr 16 01:05:47.333201 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:42488.service: Deactivated successfully. Apr 16 01:05:47.465109 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit. Apr 16 01:05:47.529882 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 01:05:47.568736 systemd-logind[1572]: Removed session 5. Apr 16 01:05:48.325812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 01:05:48.434979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:05:48.677208 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 42500 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:48.722054 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:48.801997 systemd-logind[1572]: New session 6 of user core. Apr 16 01:05:48.863048 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 01:05:49.044986 sshd[1738]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:49.129925 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:42508.service - OpenSSH per-connection server daemon (10.0.0.1:42508). Apr 16 01:05:49.136854 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:42500.service: Deactivated successfully. Apr 16 01:05:49.154043 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. Apr 16 01:05:49.172930 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 01:05:49.190846 systemd-logind[1572]: Removed session 6. Apr 16 01:05:49.347496 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 42508 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:49.352969 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:49.405655 systemd-logind[1572]: New session 7 of user core. Apr 16 01:05:49.420717 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 01:05:49.521867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
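
Each failed kubelet start is followed roughly ten seconds later by "Scheduled restart job, restart counter is at N": systemd keeps relaunching the unit because of its Restart=/RestartSec= settings. The unit file itself is not shown in this log, so the drop-in below is only an assumed sketch of settings that would produce the retry cadence seen here:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf -- assumed settings, for illustration
    [Service]
    Restart=always
    RestartSec=10
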
Apr 16 01:05:49.549995 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:05:49.810812 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 01:05:49.816732 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:05:49.913464 sudo[1763]: pam_unix(sudo:session): session closed for user root Apr 16 01:05:49.950452 sshd[1750]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:50.107137 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). Apr 16 01:05:50.113004 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:42508.service: Deactivated successfully. Apr 16 01:05:50.140674 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 01:05:50.147496 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Apr 16 01:05:50.188200 systemd-logind[1572]: Removed session 7. Apr 16 01:05:50.433406 kubelet[1765]: E0416 01:05:50.429865 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:05:50.470519 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:50.472448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:05:50.472846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:05:50.478137 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:50.559832 systemd-logind[1572]: New session 8 of user core. Apr 16 01:05:51.083081 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 01:05:52.146143 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 01:05:52.156785 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:05:52.203832 sudo[1784]: pam_unix(sudo:session): session closed for user root Apr 16 01:05:52.226153 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 16 01:05:52.227048 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:05:52.393553 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 16 01:05:52.414984 auditctl[1787]: No rules Apr 16 01:05:52.422121 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 01:05:52.423129 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 16 01:05:52.444563 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 01:05:52.693533 augenrules[1806]: No rules Apr 16 01:05:52.698498 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 16 01:05:52.700780 sudo[1783]: pam_unix(sudo:session): session closed for user root Apr 16 01:05:52.706782 sshd[1775]: pam_unix(sshd:session): session closed for user core Apr 16 01:05:52.716670 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:33122.service: Deactivated successfully. 
Apr 16 01:05:52.730986 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 01:05:52.731733 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Apr 16 01:05:52.786720 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:33128.service - OpenSSH per-connection server daemon (10.0.0.1:33128). Apr 16 01:05:52.791833 systemd-logind[1572]: Removed session 8. Apr 16 01:05:53.005646 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 33128 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:05:53.025137 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:05:53.057976 systemd-logind[1572]: New session 9 of user core. Apr 16 01:05:53.077197 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 01:05:53.172080 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 01:05:53.176831 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:05:54.291985 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 01:05:54.296479 (dockerd)[1837]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 01:05:55.640621 dockerd[1837]: time="2026-04-16T01:05:55.640107061Z" level=info msg="Starting up" Apr 16 01:05:56.325918 dockerd[1837]: time="2026-04-16T01:05:56.324893908Z" level=info msg="Loading containers: start." Apr 16 01:05:57.222649 kernel: Initializing XFRM netlink socket Apr 16 01:05:57.811181 systemd-networkd[1244]: docker0: Link UP Apr 16 01:05:57.881101 dockerd[1837]: time="2026-04-16T01:05:57.880947288Z" level=info msg="Loading containers: done." Apr 16 01:05:57.958960 dockerd[1837]: time="2026-04-16T01:05:57.958807292Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 01:05:57.959174 dockerd[1837]: time="2026-04-16T01:05:57.959110784Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 01:05:57.959500 dockerd[1837]: time="2026-04-16T01:05:57.959476332Z" level=info msg="Daemon has completed initialization" Apr 16 01:05:58.242616 dockerd[1837]: time="2026-04-16T01:05:58.239089053Z" level=info msg="API listen on /run/docker.sock" Apr 16 01:05:58.241701 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 01:06:00.309803 containerd[1593]: time="2026-04-16T01:06:00.307762643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 16 01:06:00.546623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 01:06:00.590852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:01.009770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
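
Once dockerd reports "API listen on /run/docker.sock", the engine can be queried over that unix socket; assuming curl is available on the host, a quick smoke test looks like:

    # talk to the Docker Engine API over the socket named in the log
    curl --unix-socket /run/docker.sock http://localhost/version
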
Apr 16 01:06:01.019812 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:06:01.345780 kubelet[1999]: E0416 01:06:01.344702 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:06:01.352084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:06:01.353209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:06:01.636737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203770499.mount: Deactivated successfully. Apr 16 01:06:06.079940 containerd[1593]: time="2026-04-16T01:06:06.079508730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:06.081694 containerd[1593]: time="2026-04-16T01:06:06.080994381Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 16 01:06:06.083634 containerd[1593]: time="2026-04-16T01:06:06.083591924Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:06.096405 containerd[1593]: time="2026-04-16T01:06:06.094885646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:06.096523 containerd[1593]: time="2026-04-16T01:06:06.096499455Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 5.788191752s" Apr 16 01:06:06.096569 containerd[1593]: time="2026-04-16T01:06:06.096561061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 16 01:06:06.101575 containerd[1593]: time="2026-04-16T01:06:06.101117466Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 16 01:06:11.482053 containerd[1593]: time="2026-04-16T01:06:11.481028717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:11.487019 containerd[1593]: time="2026-04-16T01:06:11.486814626Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 16 01:06:11.494144 containerd[1593]: time="2026-04-16T01:06:11.492961940Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:11.507629 containerd[1593]: time="2026-04-16T01:06:11.506666170Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:11.510173 containerd[1593]: time="2026-04-16T01:06:11.509891407Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 5.408744416s" Apr 16 01:06:11.510173 containerd[1593]: time="2026-04-16T01:06:11.510129301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 16 01:06:11.514594 containerd[1593]: time="2026-04-16T01:06:11.514468472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 16 01:06:11.543887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 01:06:11.564917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:11.958939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:11.968083 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:06:12.187749 kubelet[2083]: E0416 01:06:12.186788 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:06:12.193189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:06:12.195199 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 01:06:18.814703 containerd[1593]: time="2026-04-16T01:06:18.814496645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:18.819773 containerd[1593]: time="2026-04-16T01:06:18.818812606Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 16 01:06:18.822603 containerd[1593]: time="2026-04-16T01:06:18.822463556Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:18.826793 containerd[1593]: time="2026-04-16T01:06:18.826644600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:18.831689 containerd[1593]: time="2026-04-16T01:06:18.831433301Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 7.316769086s" Apr 16 01:06:18.831689 containerd[1593]: time="2026-04-16T01:06:18.831459623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 16 01:06:18.834174 containerd[1593]: time="2026-04-16T01:06:18.833851859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 16 01:06:20.164709 update_engine[1577]: I20260416 01:06:20.160557 1577 update_attempter.cc:509] Updating boot flags... Apr 16 01:06:20.681504 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2108) Apr 16 01:06:20.804902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2108) Apr 16 01:06:22.338900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 01:06:23.115179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:23.848046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:23.944736 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:06:24.520713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098678186.mount: Deactivated successfully. Apr 16 01:06:24.940542 kubelet[2126]: E0416 01:06:24.939783 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:06:24.944741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:06:24.945007 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
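
The update_engine "Updating boot flags..." entry above is Flatcar's A/B update machinery marking the booted USR partition; the earlier "Next update check in 6m40s" line comes from the same daemon. Assuming the standard Flatcar client tool is present on the host, its state can be inspected with:

    update_engine_client -status    # reports the updater's current state on a Flatcar host
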
Apr 16 01:06:28.031137 containerd[1593]: time="2026-04-16T01:06:28.030134554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:28.033934 containerd[1593]: time="2026-04-16T01:06:28.033158573Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 16 01:06:28.036860 containerd[1593]: time="2026-04-16T01:06:28.036812261Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:28.042437 containerd[1593]: time="2026-04-16T01:06:28.042087487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:28.045065 containerd[1593]: time="2026-04-16T01:06:28.043769508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 9.20988461s" Apr 16 01:06:28.045065 containerd[1593]: time="2026-04-16T01:06:28.043931583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 16 01:06:28.046006 containerd[1593]: time="2026-04-16T01:06:28.045834850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 16 01:06:28.898193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937225013.mount: Deactivated successfully. 
Apr 16 01:06:33.737989 containerd[1593]: time="2026-04-16T01:06:33.735558076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:33.741154 containerd[1593]: time="2026-04-16T01:06:33.739165411Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 16 01:06:33.747205 containerd[1593]: time="2026-04-16T01:06:33.746624530Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:33.761482 containerd[1593]: time="2026-04-16T01:06:33.760211240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:33.762724 containerd[1593]: time="2026-04-16T01:06:33.762577697Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.716562593s" Apr 16 01:06:33.762724 containerd[1593]: time="2026-04-16T01:06:33.762669569Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 16 01:06:33.767565 containerd[1593]: time="2026-04-16T01:06:33.766714843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 16 01:06:34.781477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663118450.mount: Deactivated successfully. 
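
The image pulls timed in these entries (the control-plane images, kube-proxy, coredns, pause, and etcd) are requested through the CRI by the bootstrap tooling. When debugging slow pulls like these, the same images can be fetched by hand in the namespace the CRI plugin uses, assuming containerd's ctr client is on the PATH:

    # pull and list an image in the k8s.io namespace used by the CRI plugin
    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10
    ctr --namespace k8s.io images ls | grep pause
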
Apr 16 01:06:34.881051 containerd[1593]: time="2026-04-16T01:06:34.880166863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:34.882728 containerd[1593]: time="2026-04-16T01:06:34.882605741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 01:06:34.886689 containerd[1593]: time="2026-04-16T01:06:34.886657927Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:34.916750 containerd[1593]: time="2026-04-16T01:06:34.916199212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:34.922055 containerd[1593]: time="2026-04-16T01:06:34.917794475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.150757691s" Apr 16 01:06:34.922055 containerd[1593]: time="2026-04-16T01:06:34.920029346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 16 01:06:34.929026 containerd[1593]: time="2026-04-16T01:06:34.927633639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 16 01:06:35.053156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 16 01:06:35.103709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:36.627814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:36.698087 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:06:36.829179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498034792.mount: Deactivated successfully. Apr 16 01:06:37.263876 kubelet[2212]: E0416 01:06:37.263105 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:06:37.329611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:06:37.331129 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 01:06:41.383912 containerd[1593]: time="2026-04-16T01:06:41.382611491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:41.387762 containerd[1593]: time="2026-04-16T01:06:41.387555826Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 16 01:06:41.397732 containerd[1593]: time="2026-04-16T01:06:41.396706708Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:41.407746 containerd[1593]: time="2026-04-16T01:06:41.407699618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:06:41.409005 containerd[1593]: time="2026-04-16T01:06:41.408805901Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 6.481084991s" Apr 16 01:06:41.409456 containerd[1593]: time="2026-04-16T01:06:41.409013833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 16 01:06:47.541188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 01:06:47.555060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:47.797912 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 01:06:47.797979 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 01:06:47.798691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:47.815113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:47.868955 systemd[1]: Reloading requested from client PID 2320 ('systemctl') (unit session-9.scope)... Apr 16 01:06:47.869475 systemd[1]: Reloading... Apr 16 01:06:48.090809 zram_generator::config[2358]: No configuration found. Apr 16 01:06:48.369119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:06:48.486688 systemd[1]: Reloading finished in 616 ms. Apr 16 01:06:48.622768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 01:06:48.623019 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 01:06:48.623744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:48.645952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:06:49.014157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:06:49.049796 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:06:49.365083 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:06:49.365083 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 01:06:49.365083 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:06:49.365083 kubelet[2419]: I0416 01:06:49.364929 2419 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 01:06:50.770940 kubelet[2419]: I0416 01:06:50.770689 2419 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 01:06:50.770940 kubelet[2419]: I0416 01:06:50.770837 2419 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:06:50.771947 kubelet[2419]: I0416 01:06:50.771834 2419 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 01:06:50.880931 kubelet[2419]: I0416 01:06:50.879695 2419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:06:50.883089 kubelet[2419]: E0416 01:06:50.883064 2419 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:06:50.905456 kubelet[2419]: E0416 01:06:50.905084 2419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:06:50.906054 kubelet[2419]: I0416 01:06:50.905789 2419 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 16 01:06:50.938113 kubelet[2419]: I0416 01:06:50.937717 2419 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 01:06:50.941202 kubelet[2419]: I0416 01:06:50.940823 2419 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:06:50.941202 kubelet[2419]: I0416 01:06:50.940978 2419 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 16 01:06:50.946858 kubelet[2419]: I0416 01:06:50.941639 2419 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:06:50.946858 kubelet[2419]: I0416 01:06:50.941651 2419 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 01:06:50.946858 kubelet[2419]: I0416 01:06:50.941922 2419 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:06:50.960105 kubelet[2419]: I0416 01:06:50.959841 2419 kubelet.go:480] "Attempting to sync node with API server" Apr 16 01:06:50.960105 kubelet[2419]: I0416 01:06:50.960003 2419 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:06:50.960105 kubelet[2419]: I0416 01:06:50.960038 2419 kubelet.go:386] "Adding apiserver pod source" Apr 16 01:06:50.960105 kubelet[2419]: I0416 01:06:50.960104 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:06:50.973627 kubelet[2419]: I0416 01:06:50.972762 2419 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:06:50.973627 kubelet[2419]: E0416 01:06:50.972824 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:06:50.974081 kubelet[2419]: I0416 01:06:50.973816 2419 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 
16 01:06:50.975747 kubelet[2419]: E0416 01:06:50.975726 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:06:50.976032 kubelet[2419]: W0416 01:06:50.975881 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 01:06:50.998811 kubelet[2419]: I0416 01:06:50.997086 2419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 01:06:50.998811 kubelet[2419]: I0416 01:06:50.998036 2419 server.go:1289] "Started kubelet" Apr 16 01:06:50.998811 kubelet[2419]: I0416 01:06:50.998971 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:06:51.002846 kubelet[2419]: I0416 01:06:51.001936 2419 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:06:51.033916 kubelet[2419]: I0416 01:06:51.032855 2419 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:06:51.035926 kubelet[2419]: I0416 01:06:51.035779 2419 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:06:51.046056 kubelet[2419]: E0416 01:06:51.036064 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b0e7697d4fea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:06:50.99799345 +0000 UTC m=+1.923820035,LastTimestamp:2026-04-16 01:06:50.99799345 +0000 UTC m=+1.923820035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:06:51.049932 kubelet[2419]: I0416 01:06:51.048927 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 01:06:51.049932 kubelet[2419]: I0416 01:06:51.049162 2419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:06:51.056900 kubelet[2419]: I0416 01:06:51.056096 2419 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 01:06:51.056900 kubelet[2419]: E0416 01:06:51.056819 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:06:51.060698 kubelet[2419]: I0416 01:06:51.057041 2419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 01:06:51.060698 kubelet[2419]: I0416 01:06:51.057081 2419 reconciler.go:26] "Reconciler: start to sync state" Apr 16 01:06:51.064723 kubelet[2419]: E0416 01:06:51.063920 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 16 01:06:51.064723 kubelet[2419]: E0416 01:06:51.064152 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms" Apr 16 01:06:51.079690 kubelet[2419]: I0416 01:06:51.077838 2419 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:06:51.079690 kubelet[2419]: I0416 01:06:51.079026 2419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:06:51.080761 kubelet[2419]: E0416 01:06:51.078877 2419 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:06:51.085878 kubelet[2419]: I0416 01:06:51.085865 2419 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:06:51.153830 kubelet[2419]: I0416 01:06:51.153806 2419 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:06:51.154140 kubelet[2419]: I0416 01:06:51.154130 2419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:06:51.154192 kubelet[2419]: I0416 01:06:51.154187 2419 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:06:51.158879 kubelet[2419]: E0416 01:06:51.157552 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:06:51.203042 kubelet[2419]: I0416 01:06:51.202918 2419 policy_none.go:49] "None policy: Start" Apr 16 01:06:51.203042 kubelet[2419]: I0416 01:06:51.202946 2419 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 01:06:51.203042 kubelet[2419]: I0416 01:06:51.202956 2419 state_mem.go:35] "Initializing new in-memory state store" Apr 16 01:06:51.235481 kubelet[2419]: E0416 01:06:51.232120 2419 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:06:51.235481 kubelet[2419]: I0416 01:06:51.232956 2419 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:06:51.235481 kubelet[2419]: I0416 01:06:51.232967 2419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:06:51.235481 kubelet[2419]: I0416 01:06:51.234819 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:06:51.241900 kubelet[2419]: E0416 01:06:51.241184 2419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 01:06:51.242873 kubelet[2419]: E0416 01:06:51.242744 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:06:51.250006 kubelet[2419]: I0416 01:06:51.249851 2419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 01:06:51.262021 kubelet[2419]: I0416 01:06:51.260959 2419 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 16 01:06:51.262021 kubelet[2419]: I0416 01:06:51.261104 2419 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 01:06:51.262021 kubelet[2419]: I0416 01:06:51.261121 2419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 01:06:51.262021 kubelet[2419]: I0416 01:06:51.261127 2419 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 01:06:51.262021 kubelet[2419]: E0416 01:06:51.261165 2419 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 16 01:06:51.263702 kubelet[2419]: E0416 01:06:51.262796 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:06:51.270790 kubelet[2419]: E0416 01:06:51.270052 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms" Apr 16 01:06:51.348738 kubelet[2419]: I0416 01:06:51.348024 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:51.350841 kubelet[2419]: E0416 01:06:51.350107 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Apr 16 01:06:51.383978 kubelet[2419]: E0416 01:06:51.382184 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:51.397979 kubelet[2419]: E0416 01:06:51.397154 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:51.414854 kubelet[2419]: E0416 01:06:51.412978 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:51.461975 kubelet[2419]: I0416 01:06:51.461696 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:06:51.461975 kubelet[2419]: I0416 01:06:51.461867 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:06:51.461975 kubelet[2419]: I0416 01:06:51.461883 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " 
pod="kube-system/kube-apiserver-localhost" Apr 16 01:06:51.461975 kubelet[2419]: I0416 01:06:51.461897 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:06:51.461975 kubelet[2419]: I0416 01:06:51.461908 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:06:51.462901 kubelet[2419]: I0416 01:06:51.461921 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:06:51.462901 kubelet[2419]: I0416 01:06:51.461933 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:06:51.462901 kubelet[2419]: I0416 01:06:51.461944 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:06:51.462901 kubelet[2419]: I0416 01:06:51.461955 2419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:06:51.564847 kubelet[2419]: I0416 01:06:51.562862 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:51.564847 kubelet[2419]: E0416 01:06:51.564871 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Apr 16 01:06:51.680785 kubelet[2419]: E0416 01:06:51.677130 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms" Apr 16 01:06:51.686539 kubelet[2419]: E0416 01:06:51.685946 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:51.691728 containerd[1593]: time="2026-04-16T01:06:51.690897511Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73c4c341181648b233d54d97a9f2a6eb,Namespace:kube-system,Attempt:0,}" Apr 16 01:06:51.703920 kubelet[2419]: E0416 01:06:51.702999 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:51.706696 containerd[1593]: time="2026-04-16T01:06:51.705068049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 16 01:06:51.714996 kubelet[2419]: E0416 01:06:51.714839 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:51.718073 containerd[1593]: time="2026-04-16T01:06:51.717847420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 16 01:06:51.982860 kubelet[2419]: I0416 01:06:51.977978 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:51.985555 kubelet[2419]: E0416 01:06:51.984871 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Apr 16 01:06:51.985555 kubelet[2419]: E0416 01:06:51.985120 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:06:52.020004 kubelet[2419]: E0416 01:06:52.019738 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:06:52.062126 kubelet[2419]: E0416 01:06:52.061918 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:06:52.091132 kubelet[2419]: E0416 01:06:52.089837 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:06:52.482146 kubelet[2419]: E0416 01:06:52.480882 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s" Apr 16 01:06:52.528043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389603421.mount: Deactivated successfully. 
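Everything this kubelet instance tries against the API in the entries above (the certificate signing request, the reflector list calls, the node lease) fails with dial tcp 10.0.0.62:6443: connect: connection refused, because the kube-apiserver static pod whose sandbox is being requested here is not serving yet. A minimal standalone probe of that endpoint, assuming nothing beyond the address printed in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the apiserver address taken from the log. Until the kube-apiserver
// static pod is listening, this returns the same "connection refused".
func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.62:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port open at", conn.RemoteAddr())
}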
Apr 16 01:06:52.549055 containerd[1593]: time="2026-04-16T01:06:52.547825595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:06:52.556721 containerd[1593]: time="2026-04-16T01:06:52.556025383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 01:06:52.563903 containerd[1593]: time="2026-04-16T01:06:52.562875737Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:06:52.567974 containerd[1593]: time="2026-04-16T01:06:52.567782642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:06:52.572084 containerd[1593]: time="2026-04-16T01:06:52.572030297Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:06:52.574039 containerd[1593]: time="2026-04-16T01:06:52.573976771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:06:52.576791 containerd[1593]: time="2026-04-16T01:06:52.576171666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:06:52.582781 containerd[1593]: time="2026-04-16T01:06:52.581949409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:06:52.583072 containerd[1593]: time="2026-04-16T01:06:52.582834152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 877.707037ms" Apr 16 01:06:52.585183 containerd[1593]: time="2026-04-16T01:06:52.585067300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 893.328338ms" Apr 16 01:06:52.590706 containerd[1593]: time="2026-04-16T01:06:52.589937047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 872.027695ms" Apr 16 01:06:52.803028 kubelet[2419]: I0416 01:06:52.801930 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:52.807737 kubelet[2419]: E0416 01:06:52.806666 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 
10.0.0.62:6443: connect: connection refused" node="localhost" Apr 16 01:06:53.076102 kubelet[2419]: E0416 01:06:53.072156 2419 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:06:53.077876 containerd[1593]: time="2026-04-16T01:06:53.076170411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:06:53.080796 containerd[1593]: time="2026-04-16T01:06:53.079764163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:06:53.081730 containerd[1593]: time="2026-04-16T01:06:53.080498443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:06:53.081730 containerd[1593]: time="2026-04-16T01:06:53.080653155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.081730 containerd[1593]: time="2026-04-16T01:06:53.080763674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.082960 containerd[1593]: time="2026-04-16T01:06:53.081174707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:06:53.083957 containerd[1593]: time="2026-04-16T01:06:53.083686545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.086857 containerd[1593]: time="2026-04-16T01:06:53.086051981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.131686 containerd[1593]: time="2026-04-16T01:06:53.130756618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:06:53.131686 containerd[1593]: time="2026-04-16T01:06:53.131199502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:06:53.145801 containerd[1593]: time="2026-04-16T01:06:53.143953305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.145801 containerd[1593]: time="2026-04-16T01:06:53.144193088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:06:53.494158 containerd[1593]: time="2026-04-16T01:06:53.493746668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"6694f5e654b055da713264fde02f0131017498c25c8c40418d6292012cb28e91\"" Apr 16 01:06:53.519006 containerd[1593]: time="2026-04-16T01:06:53.518858288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73c4c341181648b233d54d97a9f2a6eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f84386945d5b5507d03c96d3d9b8ac5c900ad7fe42e2ce502c6feb288961706\"" Apr 16 01:06:53.521866 kubelet[2419]: E0416 01:06:53.520719 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:53.526848 kubelet[2419]: E0416 01:06:53.525843 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:53.536828 containerd[1593]: time="2026-04-16T01:06:53.535899838Z" level=info msg="CreateContainer within sandbox \"6694f5e654b055da713264fde02f0131017498c25c8c40418d6292012cb28e91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 01:06:53.546104 containerd[1593]: time="2026-04-16T01:06:53.546078022Z" level=info msg="CreateContainer within sandbox \"5f84386945d5b5507d03c96d3d9b8ac5c900ad7fe42e2ce502c6feb288961706\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 01:06:53.592936 containerd[1593]: time="2026-04-16T01:06:53.592905521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"b889d21375f82adc7b4c58a12454a5261c0f6afdb3cf77f93e6e150f00123724\"" Apr 16 01:06:53.607822 kubelet[2419]: E0416 01:06:53.607063 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:53.619763 containerd[1593]: time="2026-04-16T01:06:53.618942671Z" level=info msg="CreateContainer within sandbox \"6694f5e654b055da713264fde02f0131017498c25c8c40418d6292012cb28e91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e03e47372f0fd9c1a8a846bc82c806bbec519ac632cbb247fe91dd64fe52c937\"" Apr 16 01:06:53.636794 containerd[1593]: time="2026-04-16T01:06:53.636128591Z" level=info msg="StartContainer for \"e03e47372f0fd9c1a8a846bc82c806bbec519ac632cbb247fe91dd64fe52c937\"" Apr 16 01:06:53.651844 containerd[1593]: time="2026-04-16T01:06:53.650815761Z" level=info msg="CreateContainer within sandbox \"5f84386945d5b5507d03c96d3d9b8ac5c900ad7fe42e2ce502c6feb288961706\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a131ffce319e19cbebbb1511f1490cb420b19116db7334d0be1127ebab1ad18\"" Apr 16 01:06:53.654076 containerd[1593]: time="2026-04-16T01:06:53.653118481Z" level=info msg="StartContainer for \"5a131ffce319e19cbebbb1511f1490cb420b19116db7334d0be1127ebab1ad18\"" Apr 16 01:06:53.654076 containerd[1593]: time="2026-04-16T01:06:53.653921078Z" level=info msg="CreateContainer within sandbox \"b889d21375f82adc7b4c58a12454a5261c0f6afdb3cf77f93e6e150f00123724\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 01:06:53.735478 containerd[1593]: time="2026-04-16T01:06:53.734051687Z" level=info msg="CreateContainer within sandbox \"b889d21375f82adc7b4c58a12454a5261c0f6afdb3cf77f93e6e150f00123724\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc0d0fde5fb74e6fcf9481693c3ecdd0338aae74518e8eaebc4892e4b5261c2f\"" Apr 16 01:06:53.739975 containerd[1593]: time="2026-04-16T01:06:53.739713601Z" level=info msg="StartContainer for \"fc0d0fde5fb74e6fcf9481693c3ecdd0338aae74518e8eaebc4892e4b5261c2f\"" Apr 16 01:06:53.805207 kubelet[2419]: E0416 01:06:53.805003 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:06:54.089012 kubelet[2419]: E0416 01:06:54.087202 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="3.2s" Apr 16 01:06:54.090966 containerd[1593]: time="2026-04-16T01:06:54.090932862Z" level=info msg="StartContainer for \"e03e47372f0fd9c1a8a846bc82c806bbec519ac632cbb247fe91dd64fe52c937\" returns successfully" Apr 16 01:06:54.125725 kubelet[2419]: E0416 01:06:54.125547 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:06:54.421014 kubelet[2419]: I0416 01:06:54.414882 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:54.422898 kubelet[2419]: E0416 01:06:54.422869 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" Apr 16 01:06:54.427835 kubelet[2419]: E0416 01:06:54.427099 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:54.427835 kubelet[2419]: E0416 01:06:54.427781 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:54.513738 containerd[1593]: time="2026-04-16T01:06:54.508178272Z" level=info msg="StartContainer for \"5a131ffce319e19cbebbb1511f1490cb420b19116db7334d0be1127ebab1ad18\" returns successfully" Apr 16 01:06:54.553923 kubelet[2419]: E0416 01:06:54.553007 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:06:54.778210 kubelet[2419]: E0416 01:06:54.762977 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.62:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 01:06:54.862683 containerd[1593]: time="2026-04-16T01:06:54.861153643Z" level=info msg="StartContainer for \"fc0d0fde5fb74e6fcf9481693c3ecdd0338aae74518e8eaebc4892e4b5261c2f\" returns successfully" Apr 16 01:06:55.485913 kubelet[2419]: E0416 01:06:55.485080 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:55.485913 kubelet[2419]: E0416 01:06:55.485948 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:55.491943 kubelet[2419]: E0416 01:06:55.486836 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:55.491943 kubelet[2419]: E0416 01:06:55.486910 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:55.491943 kubelet[2419]: E0416 01:06:55.487059 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:55.491943 kubelet[2419]: E0416 01:06:55.487112 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:56.501886 kubelet[2419]: E0416 01:06:56.501166 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:56.501886 kubelet[2419]: E0416 01:06:56.501802 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:56.503035 kubelet[2419]: E0416 01:06:56.502750 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:56.503035 kubelet[2419]: E0416 01:06:56.502832 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:57.561650 kubelet[2419]: E0416 01:06:57.558803 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:57.561650 kubelet[2419]: E0416 01:06:57.559378 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:57.571016 kubelet[2419]: E0416 01:06:57.567080 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:57.571016 kubelet[2419]: E0416 01:06:57.567202 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:06:57.705877 kubelet[2419]: I0416 01:06:57.702888 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:06:58.371172 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1496668815 wd_nsec: 1496668320 Apr 16 01:06:58.529132 kubelet[2419]: E0416 01:06:58.528156 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:06:58.529687 kubelet[2419]: E0416 01:06:58.529658 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:01.255134 kubelet[2419]: E0416 01:07:01.246921 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:07:04.689815 kubelet[2419]: E0416 01:07:04.689016 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:07:04.689815 kubelet[2419]: E0416 01:07:04.689831 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:07.298973 kubelet[2419]: E0416 01:07:07.298503 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 16 01:07:07.407897 kubelet[2419]: E0416 01:07:07.400177 2419 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:07:07.722800 kubelet[2419]: E0416 01:07:07.719968 2419 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 16 01:07:07.722800 kubelet[2419]: E0416 01:07:07.722550 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 01:07:08.046204 kubelet[2419]: E0416 01:07:08.042609 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 01:07:10.070990 kubelet[2419]: E0416 01:07:10.070546 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" 
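The "Failed to ensure lease exists, will retry" entries back off by doubling the retry interval: 200ms, 400ms, 800ms, 1.6s, 3.2s and, by this point, 6.4s. A short sketch that reproduces the sequence observed in these entries; the 200ms starting value and the six steps are read off the log, not taken from kubelet's source.

package main

import (
	"fmt"
	"time"
)

// Reproduce the lease retry intervals seen above: each failed attempt
// doubles the previous interval, starting at 200ms.
func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s
	}
}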
Apr 16 01:07:10.144092 kubelet[2419]: E0416 01:07:10.143058 2419 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 01:07:10.209540 kubelet[2419]: E0416 01:07:10.208673 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a6b0e7697d4fea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:06:50.99799345 +0000 UTC m=+1.923820035,LastTimestamp:2026-04-16 01:06:50.99799345 +0000 UTC m=+1.923820035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:07:11.256050 kubelet[2419]: E0416 01:07:11.252073 2419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:07:14.129472 kubelet[2419]: I0416 01:07:14.128081 2419 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:07:17.460968 kubelet[2419]: E0416 01:07:17.460629 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 01:07:17.757516 kubelet[2419]: I0416 01:07:17.754182 2419 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:07:17.757516 kubelet[2419]: E0416 01:07:17.756825 2419 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 01:07:17.881947 kubelet[2419]: E0416 01:07:17.881473 2419 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:07:17.881947 kubelet[2419]: E0416 01:07:17.881943 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:17.900454 kubelet[2419]: E0416 01:07:17.898456 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.006749 kubelet[2419]: E0416 01:07:18.005070 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.111476 kubelet[2419]: E0416 01:07:18.107970 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.210655 kubelet[2419]: E0416 01:07:18.210100 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.313599 kubelet[2419]: E0416 01:07:18.312783 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.415894 kubelet[2419]: E0416 01:07:18.414033 2419 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.518077 kubelet[2419]: E0416 01:07:18.516752 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.620325 kubelet[2419]: E0416 01:07:18.618832 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.721740 kubelet[2419]: E0416 01:07:18.719752 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.821068 kubelet[2419]: E0416 01:07:18.820633 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:18.924691 kubelet[2419]: E0416 01:07:18.922994 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.028159 kubelet[2419]: E0416 01:07:19.023705 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.126410 kubelet[2419]: E0416 01:07:19.124613 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.227114 kubelet[2419]: E0416 01:07:19.226705 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.330921 kubelet[2419]: E0416 01:07:19.329944 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.432950 kubelet[2419]: E0416 01:07:19.431169 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.537127 kubelet[2419]: E0416 01:07:19.536773 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.657063 kubelet[2419]: E0416 01:07:19.637995 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.739572 kubelet[2419]: E0416 01:07:19.738900 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.841752 kubelet[2419]: E0416 01:07:19.840994 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:19.941910 kubelet[2419]: E0416 01:07:19.941639 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.051017 kubelet[2419]: E0416 01:07:20.050700 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.152128 kubelet[2419]: E0416 01:07:20.151566 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.253453 kubelet[2419]: E0416 01:07:20.253047 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.363010 kubelet[2419]: E0416 01:07:20.354113 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.462614 kubelet[2419]: E0416 01:07:20.461847 2419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:07:20.578649 kubelet[2419]: 
I0416 01:07:20.574859 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:20.760821 kubelet[2419]: I0416 01:07:20.754885 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:20.885785 kubelet[2419]: I0416 01:07:20.885077 2419 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:07:21.203763 kubelet[2419]: I0416 01:07:21.201561 2419 apiserver.go:52] "Watching apiserver" Apr 16 01:07:21.301923 kubelet[2419]: E0416 01:07:21.298833 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:21.301923 kubelet[2419]: E0416 01:07:21.298893 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:21.301923 kubelet[2419]: E0416 01:07:21.300812 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:21.361739 kubelet[2419]: I0416 01:07:21.358953 2419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 01:07:28.414818 kubelet[2419]: E0416 01:07:28.414590 2419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:28.791650 kubelet[2419]: I0416 01:07:28.789030 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.789002566 podStartE2EDuration="8.789002566s" podCreationTimestamp="2026-04-16 01:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:07:28.72558593 +0000 UTC m=+39.651412526" watchObservedRunningTime="2026-04-16 01:07:28.789002566 +0000 UTC m=+39.714829153" Apr 16 01:07:28.807189 kubelet[2419]: I0416 01:07:28.806829 2419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.806799767 podStartE2EDuration="8.806799767s" podCreationTimestamp="2026-04-16 01:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:07:28.784033328 +0000 UTC m=+39.709859926" watchObservedRunningTime="2026-04-16 01:07:28.806799767 +0000 UTC m=+39.732626357" Apr 16 01:07:29.155159 systemd[1]: Reloading requested from client PID 2713 ('systemctl') (unit session-9.scope)... Apr 16 01:07:29.155503 systemd[1]: Reloading... Apr 16 01:07:29.629777 zram_generator::config[2752]: No configuration found. Apr 16 01:07:30.178624 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:07:30.436728 systemd[1]: Reloading finished in 1280 ms. Apr 16 01:07:30.548611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:07:30.575883 systemd[1]: kubelet.service: Deactivated successfully. 
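The pod_startup_latency_tracker entries above report podStartSLOduration=8.789002566s for kube-apiserver-localhost; that figure is simply observedRunningTime minus podCreationTimestamp. A quick check using the two timestamps copied verbatim from that entry:

package main

import (
	"fmt"
	"time"
)

// Recompute the podStartSLOduration reported above from the two
// timestamps printed in the same log entry.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-04-16 01:07:20 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-04-16 01:07:28.789002566 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("startup duration:", running.Sub(created)) // 8.789002566s
}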
Apr 16 01:07:30.580194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:07:30.601671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:07:31.235107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:07:31.274542 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:07:31.716855 kubelet[2807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:07:31.716855 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 01:07:31.716855 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:07:31.727085 kubelet[2807]: I0416 01:07:31.720099 2807 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 01:07:31.807721 kubelet[2807]: I0416 01:07:31.804003 2807 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 01:07:31.807721 kubelet[2807]: I0416 01:07:31.804479 2807 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:07:31.807721 kubelet[2807]: I0416 01:07:31.805709 2807 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 01:07:31.818677 kubelet[2807]: I0416 01:07:31.812634 2807 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 01:07:31.828604 kubelet[2807]: I0416 01:07:31.826441 2807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:07:31.951703 kubelet[2807]: E0416 01:07:31.951411 2807 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:07:31.951703 kubelet[2807]: I0416 01:07:31.951536 2807 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 16 01:07:31.975479 kubelet[2807]: I0416 01:07:31.968131 2807 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 01:07:31.975479 kubelet[2807]: I0416 01:07:31.973656 2807 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:07:31.976164 kubelet[2807]: I0416 01:07:31.975032 2807 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 16 01:07:31.976164 kubelet[2807]: I0416 01:07:31.975883 2807 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 01:07:31.976164 kubelet[2807]: I0416 01:07:31.975896 2807 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 01:07:31.976767 kubelet[2807]: I0416 01:07:31.976642 2807 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:07:31.983675 kubelet[2807]: I0416 01:07:31.981178 2807 kubelet.go:480] "Attempting to sync node with API server" Apr 16 01:07:31.983675 kubelet[2807]: I0416 01:07:31.982097 2807 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:07:31.985783 kubelet[2807]: I0416 01:07:31.985612 2807 kubelet.go:386] "Adding apiserver pod source" Apr 16 01:07:31.985783 kubelet[2807]: I0416 01:07:31.985673 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:07:31.999678 kubelet[2807]: I0416 01:07:31.994178 2807 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:07:32.001860 kubelet[2807]: I0416 01:07:32.001699 2807 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:07:32.090696 kubelet[2807]: I0416 01:07:32.087814 2807 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 01:07:32.090696 kubelet[2807]: I0416 01:07:32.087851 2807 server.go:1289] "Started kubelet" Apr 16 01:07:32.092765 kubelet[2807]: I0416 01:07:32.092711 2807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 
01:07:32.103722 kubelet[2807]: I0416 01:07:32.103541 2807 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:07:32.139896 kubelet[2807]: I0416 01:07:32.138559 2807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:07:32.155538 kubelet[2807]: I0416 01:07:32.150059 2807 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:07:32.160519 kubelet[2807]: I0416 01:07:32.158468 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 01:07:32.162099 kubelet[2807]: E0416 01:07:32.162008 2807 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:07:32.164011 kubelet[2807]: I0416 01:07:32.163874 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:07:32.178569 kubelet[2807]: I0416 01:07:32.178074 2807 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 01:07:32.181002 kubelet[2807]: I0416 01:07:32.180602 2807 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 01:07:32.195094 kubelet[2807]: I0416 01:07:32.192486 2807 reconciler.go:26] "Reconciler: start to sync state" Apr 16 01:07:32.211891 kubelet[2807]: I0416 01:07:32.210935 2807 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:07:32.211891 kubelet[2807]: I0416 01:07:32.211022 2807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:07:32.221764 kubelet[2807]: I0416 01:07:32.220814 2807 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:07:32.536095 kubelet[2807]: I0416 01:07:32.527053 2807 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 16 01:07:32.546154 kubelet[2807]: I0416 01:07:32.540579 2807 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 01:07:32.546154 kubelet[2807]: I0416 01:07:32.541184 2807 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 01:07:32.546154 kubelet[2807]: I0416 01:07:32.541398 2807 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 01:07:32.546154 kubelet[2807]: I0416 01:07:32.541408 2807 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 01:07:32.546154 kubelet[2807]: E0416 01:07:32.541540 2807 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:07:32.647934 kubelet[2807]: E0416 01:07:32.646204 2807 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 01:07:32.809958 kubelet[2807]: I0416 01:07:32.809493 2807 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 01:07:32.809958 kubelet[2807]: I0416 01:07:32.809574 2807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.814213 2807 state_mem.go:36] "Initialized new in-memory state store" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815452 2807 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815463 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815479 2807 policy_none.go:49] "None policy: Start" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815488 2807 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815496 2807 state_mem.go:35] "Initializing new in-memory state store" Apr 16 01:07:32.820909 kubelet[2807]: I0416 01:07:32.815712 2807 state_mem.go:75] "Updated machine memory state" Apr 16 01:07:32.826177 kubelet[2807]: E0416 01:07:32.823993 2807 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:07:32.826177 kubelet[2807]: I0416 01:07:32.824199 2807 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 01:07:32.832513 kubelet[2807]: I0416 01:07:32.830408 2807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:07:32.841854 kubelet[2807]: I0416 01:07:32.839930 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 01:07:32.856897 kubelet[2807]: I0416 01:07:32.854866 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:32.867757 kubelet[2807]: I0416 01:07:32.864865 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:07:32.867757 kubelet[2807]: I0416 01:07:32.865523 2807 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.903675 kubelet[2807]: E0416 01:07:32.903470 2807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 01:07:32.921875 kubelet[2807]: E0416 01:07:32.920921 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:32.932582 kubelet[2807]: E0416 01:07:32.930037 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.932582 kubelet[2807]: E0416 01:07:32.930046 2807 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 01:07:32.983013 kubelet[2807]: I0416 01:07:32.976169 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:32.983013 kubelet[2807]: I0416 01:07:32.976639 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.983013 kubelet[2807]: I0416 01:07:32.976657 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.983013 kubelet[2807]: I0416 01:07:32.976910 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.983013 kubelet[2807]: I0416 01:07:32.976925 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.992545 kubelet[2807]: I0416 01:07:32.977068 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:32.992545 kubelet[2807]: I0416 01:07:32.977080 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73c4c341181648b233d54d97a9f2a6eb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73c4c341181648b233d54d97a9f2a6eb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:07:32.992545 kubelet[2807]: I0416 
01:07:32.977092 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:07:32.992545 kubelet[2807]: I0416 01:07:32.977105 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:07:32.992545 kubelet[2807]: I0416 01:07:32.992520 2807 apiserver.go:52] "Watching apiserver" Apr 16 01:07:33.033755 kubelet[2807]: I0416 01:07:33.033674 2807 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 01:07:33.084912 kubelet[2807]: I0416 01:07:33.084423 2807 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 01:07:33.115147 kubelet[2807]: I0416 01:07:33.111160 2807 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 01:07:33.115147 kubelet[2807]: I0416 01:07:33.111868 2807 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 01:07:33.223677 kubelet[2807]: E0416 01:07:33.223002 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.232462 kubelet[2807]: E0416 01:07:33.230925 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.232462 kubelet[2807]: E0416 01:07:33.231093 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.702113 kubelet[2807]: E0416 01:07:33.701971 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.713403 kubelet[2807]: E0416 01:07:33.704728 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:33.732913 kubelet[2807]: E0416 01:07:33.726143 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:34.731686 kubelet[2807]: E0416 01:07:34.729582 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:34.834813 kubelet[2807]: E0416 01:07:34.833513 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:35.264559 kubelet[2807]: I0416 01:07:35.263458 2807 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 01:07:35.273462 containerd[1593]: 
time="2026-04-16T01:07:35.271749717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 01:07:35.285051 kubelet[2807]: I0416 01:07:35.284736 2807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 01:07:37.507659 kubelet[2807]: I0416 01:07:37.503980 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a2dc1b6-5e26-4360-a5ba-352545e405d1-xtables-lock\") pod \"kube-proxy-pbb74\" (UID: \"3a2dc1b6-5e26-4360-a5ba-352545e405d1\") " pod="kube-system/kube-proxy-pbb74" Apr 16 01:07:37.526703 kubelet[2807]: I0416 01:07:37.526485 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a2dc1b6-5e26-4360-a5ba-352545e405d1-lib-modules\") pod \"kube-proxy-pbb74\" (UID: \"3a2dc1b6-5e26-4360-a5ba-352545e405d1\") " pod="kube-system/kube-proxy-pbb74" Apr 16 01:07:37.533489 kubelet[2807]: I0416 01:07:37.532759 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvkh\" (UniqueName: \"kubernetes.io/projected/3a2dc1b6-5e26-4360-a5ba-352545e405d1-kube-api-access-nmvkh\") pod \"kube-proxy-pbb74\" (UID: \"3a2dc1b6-5e26-4360-a5ba-352545e405d1\") " pod="kube-system/kube-proxy-pbb74" Apr 16 01:07:37.533489 kubelet[2807]: I0416 01:07:37.532986 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a2dc1b6-5e26-4360-a5ba-352545e405d1-kube-proxy\") pod \"kube-proxy-pbb74\" (UID: \"3a2dc1b6-5e26-4360-a5ba-352545e405d1\") " pod="kube-system/kube-proxy-pbb74" Apr 16 01:07:37.676680 kubelet[2807]: E0416 01:07:37.673866 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:37.919438 kubelet[2807]: E0416 01:07:37.918469 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:37.978121 kubelet[2807]: E0416 01:07:37.977428 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:37.985927 containerd[1593]: time="2026-04-16T01:07:37.983425814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbb74,Uid:3a2dc1b6-5e26-4360-a5ba-352545e405d1,Namespace:kube-system,Attempt:0,}" Apr 16 01:07:38.566665 containerd[1593]: time="2026-04-16T01:07:38.556801364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:07:38.570529 containerd[1593]: time="2026-04-16T01:07:38.569600465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:07:38.570529 containerd[1593]: time="2026-04-16T01:07:38.569632659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:38.570529 containerd[1593]: time="2026-04-16T01:07:38.569815154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:38.776737 systemd[1]: run-containerd-runc-k8s.io-606b4c9e2c7afa5b5f7be4c0be962e416117e70f3afe4ef6076a088b61c99a4c-runc.kCWDm5.mount: Deactivated successfully. Apr 16 01:07:39.170456 containerd[1593]: time="2026-04-16T01:07:39.167858530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbb74,Uid:3a2dc1b6-5e26-4360-a5ba-352545e405d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"606b4c9e2c7afa5b5f7be4c0be962e416117e70f3afe4ef6076a088b61c99a4c\"" Apr 16 01:07:39.175027 kubelet[2807]: E0416 01:07:39.172522 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:39.240497 containerd[1593]: time="2026-04-16T01:07:39.237553107Z" level=info msg="CreateContainer within sandbox \"606b4c9e2c7afa5b5f7be4c0be962e416117e70f3afe4ef6076a088b61c99a4c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 01:07:39.258919 kubelet[2807]: E0416 01:07:39.258559 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:39.384869 containerd[1593]: time="2026-04-16T01:07:39.384468997Z" level=info msg="CreateContainer within sandbox \"606b4c9e2c7afa5b5f7be4c0be962e416117e70f3afe4ef6076a088b61c99a4c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cbe3c6d0ffad685172f6311cae081b5bc0aa113adbc7e12b69b926bc71f3908\"" Apr 16 01:07:39.436976 containerd[1593]: time="2026-04-16T01:07:39.433776274Z" level=info msg="StartContainer for \"2cbe3c6d0ffad685172f6311cae081b5bc0aa113adbc7e12b69b926bc71f3908\"" Apr 16 01:07:40.147868 kubelet[2807]: E0416 01:07:40.144791 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:41.126041 containerd[1593]: time="2026-04-16T01:07:41.125815727Z" level=info msg="StartContainer for \"2cbe3c6d0ffad685172f6311cae081b5bc0aa113adbc7e12b69b926bc71f3908\" returns successfully" Apr 16 01:07:41.495797 kubelet[2807]: I0416 01:07:41.488625 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ae72145-447e-4f8f-ad8a-c2a0c95376d5-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-phx9b\" (UID: \"3ae72145-447e-4f8f-ad8a-c2a0c95376d5\") " pod="tigera-operator/tigera-operator-6bf85f8dd-phx9b" Apr 16 01:07:41.495797 kubelet[2807]: I0416 01:07:41.488676 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttr8\" (UniqueName: \"kubernetes.io/projected/3ae72145-447e-4f8f-ad8a-c2a0c95376d5-kube-api-access-8ttr8\") pod \"tigera-operator-6bf85f8dd-phx9b\" (UID: \"3ae72145-447e-4f8f-ad8a-c2a0c95376d5\") " pod="tigera-operator/tigera-operator-6bf85f8dd-phx9b" Apr 16 01:07:41.626505 kubelet[2807]: E0416 01:07:41.623578 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:41.804016 containerd[1593]: time="2026-04-16T01:07:41.790063990Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-phx9b,Uid:3ae72145-447e-4f8f-ad8a-c2a0c95376d5,Namespace:tigera-operator,Attempt:0,}" Apr 16 01:07:41.999856 kubelet[2807]: E0416 01:07:41.999832 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:42.001411 kubelet[2807]: E0416 01:07:41.999840 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:42.044906 containerd[1593]: time="2026-04-16T01:07:42.040627976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:07:42.044906 containerd[1593]: time="2026-04-16T01:07:42.041499503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:07:42.044906 containerd[1593]: time="2026-04-16T01:07:42.041508635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:42.044906 containerd[1593]: time="2026-04-16T01:07:42.043438115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:07:42.206389 kubelet[2807]: I0416 01:07:42.205019 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbb74" podStartSLOduration=7.204994431 podStartE2EDuration="7.204994431s" podCreationTimestamp="2026-04-16 01:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:07:42.204479357 +0000 UTC m=+10.875909319" watchObservedRunningTime="2026-04-16 01:07:42.204994431 +0000 UTC m=+10.876424391" Apr 16 01:07:42.638525 containerd[1593]: time="2026-04-16T01:07:42.636516526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-phx9b,Uid:3ae72145-447e-4f8f-ad8a-c2a0c95376d5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"934425e6cf513f1e4d0e635d0439315f355361c04ea5dce752ca838bdfa28cae\"" Apr 16 01:07:42.678081 containerd[1593]: time="2026-04-16T01:07:42.674771417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 01:07:43.015877 kubelet[2807]: E0416 01:07:43.012196 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:07:44.595400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693189868.mount: Deactivated successfully. 
Apr 16 01:07:49.018824 containerd[1593]: time="2026-04-16T01:07:49.017653958Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:49.022784 containerd[1593]: time="2026-04-16T01:07:49.021909020Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 01:07:49.028861 containerd[1593]: time="2026-04-16T01:07:49.028481911Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:49.038501 containerd[1593]: time="2026-04-16T01:07:49.038053179Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:07:49.044637 containerd[1593]: time="2026-04-16T01:07:49.042394639Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 6.367308022s" Apr 16 01:07:49.044637 containerd[1593]: time="2026-04-16T01:07:49.042431384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 01:07:49.065632 containerd[1593]: time="2026-04-16T01:07:49.064753476Z" level=info msg="CreateContainer within sandbox \"934425e6cf513f1e4d0e635d0439315f355361c04ea5dce752ca838bdfa28cae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 01:07:49.138029 containerd[1593]: time="2026-04-16T01:07:49.137776045Z" level=info msg="CreateContainer within sandbox \"934425e6cf513f1e4d0e635d0439315f355361c04ea5dce752ca838bdfa28cae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"32cdb1eeb596e4ec552a48fc55d64d339c87e7cacd6a487138e5b51528216722\"" Apr 16 01:07:49.158794 containerd[1593]: time="2026-04-16T01:07:49.158689854Z" level=info msg="StartContainer for \"32cdb1eeb596e4ec552a48fc55d64d339c87e7cacd6a487138e5b51528216722\"" Apr 16 01:07:49.605946 containerd[1593]: time="2026-04-16T01:07:49.605537137Z" level=info msg="StartContainer for \"32cdb1eeb596e4ec552a48fc55d64d339c87e7cacd6a487138e5b51528216722\" returns successfully" Apr 16 01:07:50.146868 kubelet[2807]: I0416 01:07:50.146000 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-phx9b" podStartSLOduration=2.757868191 podStartE2EDuration="9.14598241s" podCreationTimestamp="2026-04-16 01:07:41 +0000 UTC" firstStartedPulling="2026-04-16 01:07:42.665943787 +0000 UTC m=+11.337373729" lastFinishedPulling="2026-04-16 01:07:49.054057997 +0000 UTC m=+17.725487948" observedRunningTime="2026-04-16 01:07:50.140839295 +0000 UTC m=+18.812269242" watchObservedRunningTime="2026-04-16 01:07:50.14598241 +0000 UTC m=+18.817412368" Apr 16 01:08:08.258945 sudo[1819]: pam_unix(sudo:session): session closed for user root Apr 16 01:08:08.277558 sshd[1815]: pam_unix(sshd:session): session closed for user core Apr 16 01:08:08.316872 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:33128.service: Deactivated successfully. 
Apr 16 01:08:08.356049 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Apr 16 01:08:08.357645 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 01:08:08.364868 systemd-logind[1572]: Removed session 9. Apr 16 01:08:24.941032 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:08:24.883940 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:08:24.884107 systemd-resolved[1467]: Flushed all caches. Apr 16 01:08:26.932666 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:08:26.902697 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:08:26.902705 systemd-resolved[1467]: Flushed all caches. Apr 16 01:08:37.404931 kubelet[2807]: I0416 01:08:37.403112 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d1943c-3694-4b67-abce-aefb61e908fc-tigera-ca-bundle\") pod \"calico-typha-7f78fffc4f-mwfvh\" (UID: \"65d1943c-3694-4b67-abce-aefb61e908fc\") " pod="calico-system/calico-typha-7f78fffc4f-mwfvh" Apr 16 01:08:37.404931 kubelet[2807]: I0416 01:08:37.404700 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsh24\" (UniqueName: \"kubernetes.io/projected/65d1943c-3694-4b67-abce-aefb61e908fc-kube-api-access-lsh24\") pod \"calico-typha-7f78fffc4f-mwfvh\" (UID: \"65d1943c-3694-4b67-abce-aefb61e908fc\") " pod="calico-system/calico-typha-7f78fffc4f-mwfvh" Apr 16 01:08:37.404931 kubelet[2807]: I0416 01:08:37.404721 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65d1943c-3694-4b67-abce-aefb61e908fc-typha-certs\") pod \"calico-typha-7f78fffc4f-mwfvh\" (UID: \"65d1943c-3694-4b67-abce-aefb61e908fc\") " pod="calico-system/calico-typha-7f78fffc4f-mwfvh" Apr 16 01:08:38.054642 kubelet[2807]: E0416 01:08:38.053915 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:38.072802 containerd[1593]: time="2026-04-16T01:08:38.071990675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f78fffc4f-mwfvh,Uid:65d1943c-3694-4b67-abce-aefb61e908fc,Namespace:calico-system,Attempt:0,}" Apr 16 01:08:38.810973 kubelet[2807]: I0416 01:08:38.800721 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-cni-bin-dir\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.852860 kubelet[2807]: I0416 01:08:38.849059 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-lib-modules\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.866686 kubelet[2807]: I0416 01:08:38.865750 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-var-run-calico\") pod \"calico-node-l9vc6\" (UID: 
\"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.866686 kubelet[2807]: I0416 01:08:38.865831 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-policysync\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.866686 kubelet[2807]: I0416 01:08:38.865847 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d691048-cdb0-4720-ba43-c91642d909e1-tigera-ca-bundle\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.866686 kubelet[2807]: I0416 01:08:38.865863 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-nodeproc\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.866686 kubelet[2807]: I0416 01:08:38.865882 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-bpffs\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.867764 kubelet[2807]: I0416 01:08:38.865898 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-var-lib-calico\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.867764 kubelet[2807]: I0416 01:08:38.865913 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-flexvol-driver-host\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.867764 kubelet[2807]: I0416 01:08:38.865933 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-cni-net-dir\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.867764 kubelet[2807]: I0416 01:08:38.865947 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9vh\" (UniqueName: \"kubernetes.io/projected/8d691048-cdb0-4720-ba43-c91642d909e1-kube-api-access-7h9vh\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.867764 kubelet[2807]: I0416 01:08:38.865963 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-xtables-lock\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 
01:08:38.868108 kubelet[2807]: I0416 01:08:38.865977 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-sys-fs\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.877990 kubelet[2807]: I0416 01:08:38.865990 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8d691048-cdb0-4720-ba43-c91642d909e1-node-certs\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.882702 kubelet[2807]: I0416 01:08:38.880178 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8d691048-cdb0-4720-ba43-c91642d909e1-cni-log-dir\") pod \"calico-node-l9vc6\" (UID: \"8d691048-cdb0-4720-ba43-c91642d909e1\") " pod="calico-system/calico-node-l9vc6" Apr 16 01:08:38.897892 kubelet[2807]: E0416 01:08:38.896986 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:38.937193 containerd[1593]: time="2026-04-16T01:08:38.913008888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:08:38.937193 containerd[1593]: time="2026-04-16T01:08:38.913187860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:08:38.937193 containerd[1593]: time="2026-04-16T01:08:38.913200564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:08:38.946856 containerd[1593]: time="2026-04-16T01:08:38.942051160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:08:38.999705 kubelet[2807]: I0416 01:08:38.984193 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73d74924-8e40-46ed-8ff0-31c0cdbb144c-kubelet-dir\") pod \"csi-node-driver-gqrfc\" (UID: \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\") " pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:08:38.999705 kubelet[2807]: I0416 01:08:38.985030 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73d74924-8e40-46ed-8ff0-31c0cdbb144c-socket-dir\") pod \"csi-node-driver-gqrfc\" (UID: \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\") " pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:08:38.999705 kubelet[2807]: I0416 01:08:38.985088 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73d74924-8e40-46ed-8ff0-31c0cdbb144c-registration-dir\") pod \"csi-node-driver-gqrfc\" (UID: \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\") " pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:08:38.999705 kubelet[2807]: I0416 01:08:38.985103 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/73d74924-8e40-46ed-8ff0-31c0cdbb144c-varrun\") pod \"csi-node-driver-gqrfc\" (UID: \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\") " pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:08:38.999705 kubelet[2807]: I0416 01:08:38.985122 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctqx\" (UniqueName: \"kubernetes.io/projected/73d74924-8e40-46ed-8ff0-31c0cdbb144c-kube-api-access-lctqx\") pod \"csi-node-driver-gqrfc\" (UID: \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\") " pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:08:39.163163 kubelet[2807]: E0416 01:08:39.147045 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.163163 kubelet[2807]: W0416 01:08:39.147179 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.163163 kubelet[2807]: E0416 01:08:39.147718 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.219157 kubelet[2807]: E0416 01:08:39.209763 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.219157 kubelet[2807]: W0416 01:08:39.209913 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.219157 kubelet[2807]: E0416 01:08:39.209942 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.239800 kubelet[2807]: E0416 01:08:39.233096 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.239800 kubelet[2807]: W0416 01:08:39.233123 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.239800 kubelet[2807]: E0416 01:08:39.233151 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.284914 kubelet[2807]: E0416 01:08:39.282735 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.284914 kubelet[2807]: W0416 01:08:39.282764 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.284914 kubelet[2807]: E0416 01:08:39.282796 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.291837 kubelet[2807]: E0416 01:08:39.289859 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.301883 kubelet[2807]: W0416 01:08:39.298089 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.301883 kubelet[2807]: E0416 01:08:39.298120 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.327774 kubelet[2807]: E0416 01:08:39.326912 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.327774 kubelet[2807]: W0416 01:08:39.326973 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.327774 kubelet[2807]: E0416 01:08:39.327167 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.331839 kubelet[2807]: E0416 01:08:39.328831 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.331839 kubelet[2807]: W0416 01:08:39.328847 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.331839 kubelet[2807]: E0416 01:08:39.328863 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.362075 kubelet[2807]: E0416 01:08:39.361083 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.362075 kubelet[2807]: W0416 01:08:39.361154 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.362075 kubelet[2807]: E0416 01:08:39.361182 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.363857 kubelet[2807]: E0416 01:08:39.363806 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.363857 kubelet[2807]: W0416 01:08:39.363824 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.363857 kubelet[2807]: E0416 01:08:39.363842 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.368118 kubelet[2807]: E0416 01:08:39.366058 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.368118 kubelet[2807]: W0416 01:08:39.366071 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.368118 kubelet[2807]: E0416 01:08:39.366082 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.393716 kubelet[2807]: E0416 01:08:39.383198 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.393827 kubelet[2807]: W0416 01:08:39.393204 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.401024 kubelet[2807]: E0416 01:08:39.400854 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.401886 kubelet[2807]: E0416 01:08:39.401870 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.403155 kubelet[2807]: W0416 01:08:39.403014 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.403155 kubelet[2807]: E0416 01:08:39.403045 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.407912 kubelet[2807]: E0416 01:08:39.403978 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.407912 kubelet[2807]: W0416 01:08:39.403990 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.407912 kubelet[2807]: E0416 01:08:39.404002 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.412197 kubelet[2807]: E0416 01:08:39.411863 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.412197 kubelet[2807]: W0416 01:08:39.411874 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.412197 kubelet[2807]: E0416 01:08:39.411884 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.438836 kubelet[2807]: E0416 01:08:39.436728 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.438836 kubelet[2807]: W0416 01:08:39.436922 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.438836 kubelet[2807]: E0416 01:08:39.436963 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.441948 kubelet[2807]: E0416 01:08:39.441934 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.442104 kubelet[2807]: W0416 01:08:39.442009 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.442104 kubelet[2807]: E0416 01:08:39.442022 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.457791 kubelet[2807]: E0416 01:08:39.457134 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.457791 kubelet[2807]: W0416 01:08:39.457779 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.457791 kubelet[2807]: E0416 01:08:39.457796 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.471931 kubelet[2807]: E0416 01:08:39.471059 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.471931 kubelet[2807]: W0416 01:08:39.471699 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.471931 kubelet[2807]: E0416 01:08:39.471740 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.472922 kubelet[2807]: E0416 01:08:39.472142 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.472922 kubelet[2807]: W0416 01:08:39.472150 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.472922 kubelet[2807]: E0416 01:08:39.472158 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.484889 kubelet[2807]: E0416 01:08:39.483194 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.484889 kubelet[2807]: W0416 01:08:39.483956 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.484889 kubelet[2807]: E0416 01:08:39.484048 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.489750 kubelet[2807]: E0416 01:08:39.488045 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.489750 kubelet[2807]: W0416 01:08:39.488061 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.489750 kubelet[2807]: E0416 01:08:39.488080 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.501833 kubelet[2807]: E0416 01:08:39.499785 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.501833 kubelet[2807]: W0416 01:08:39.499813 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.501833 kubelet[2807]: E0416 01:08:39.499832 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.501833 kubelet[2807]: E0416 01:08:39.501036 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.501833 kubelet[2807]: W0416 01:08:39.501048 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.501833 kubelet[2807]: E0416 01:08:39.501060 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.502889 kubelet[2807]: E0416 01:08:39.501995 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.502889 kubelet[2807]: W0416 01:08:39.502006 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.502889 kubelet[2807]: E0416 01:08:39.502017 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.502889 kubelet[2807]: E0416 01:08:39.502739 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.502889 kubelet[2807]: W0416 01:08:39.502747 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.502889 kubelet[2807]: E0416 01:08:39.502756 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.504781 kubelet[2807]: E0416 01:08:39.503053 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.504781 kubelet[2807]: W0416 01:08:39.503204 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.504781 kubelet[2807]: E0416 01:08:39.503689 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.506822 kubelet[2807]: E0416 01:08:39.506796 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.506822 kubelet[2807]: W0416 01:08:39.506808 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.506822 kubelet[2807]: E0416 01:08:39.506816 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:39.583860 containerd[1593]: time="2026-04-16T01:08:39.583083934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9vc6,Uid:8d691048-cdb0-4720-ba43-c91642d909e1,Namespace:calico-system,Attempt:0,}" Apr 16 01:08:39.634706 kubelet[2807]: E0416 01:08:39.630914 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:39.634706 kubelet[2807]: W0416 01:08:39.631031 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:39.634706 kubelet[2807]: E0416 01:08:39.631070 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:39.945758 containerd[1593]: time="2026-04-16T01:08:39.945706959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f78fffc4f-mwfvh,Uid:65d1943c-3694-4b67-abce-aefb61e908fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"20606917a4f7f01cdd20bb2650083f1229f55eb53ef691ff306629d6c202e379\"" Apr 16 01:08:39.961770 kubelet[2807]: E0416 01:08:39.961746 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:39.978684 containerd[1593]: time="2026-04-16T01:08:39.977494955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 01:08:40.049765 containerd[1593]: time="2026-04-16T01:08:40.017752582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:08:40.049765 containerd[1593]: time="2026-04-16T01:08:40.046166044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:08:40.092548 containerd[1593]: time="2026-04-16T01:08:40.069917815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:08:40.109750 containerd[1593]: time="2026-04-16T01:08:40.093916332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:08:40.349923 systemd[1]: run-containerd-runc-k8s.io-f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd-runc.QAfEFd.mount: Deactivated successfully. 
Apr 16 01:08:40.611687 kubelet[2807]: E0416 01:08:40.610010 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:40.763021 containerd[1593]: time="2026-04-16T01:08:40.760994334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9vc6,Uid:8d691048-cdb0-4720-ba43-c91642d909e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\"" Apr 16 01:08:42.570739 kubelet[2807]: E0416 01:08:42.568092 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:42.607092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329765354.mount: Deactivated successfully. Apr 16 01:08:44.556201 kubelet[2807]: E0416 01:08:44.555831 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:46.549908 kubelet[2807]: E0416 01:08:46.549734 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:48.393567 containerd[1593]: time="2026-04-16T01:08:48.390667300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:48.395848 containerd[1593]: time="2026-04-16T01:08:48.395711051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 01:08:48.413097 containerd[1593]: time="2026-04-16T01:08:48.412902317Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:48.430395 containerd[1593]: time="2026-04-16T01:08:48.429888913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:48.431055 containerd[1593]: time="2026-04-16T01:08:48.430886157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 8.451808573s" Apr 16 01:08:48.431093 containerd[1593]: time="2026-04-16T01:08:48.431063193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 01:08:48.442680 containerd[1593]: time="2026-04-16T01:08:48.441892084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 01:08:48.547988 kubelet[2807]: E0416 01:08:48.544775 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:48.549019 containerd[1593]: time="2026-04-16T01:08:48.545709290Z" level=info msg="CreateContainer within sandbox \"20606917a4f7f01cdd20bb2650083f1229f55eb53ef691ff306629d6c202e379\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 01:08:48.636969 containerd[1593]: time="2026-04-16T01:08:48.636202755Z" level=info msg="CreateContainer within sandbox \"20606917a4f7f01cdd20bb2650083f1229f55eb53ef691ff306629d6c202e379\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ee978f0bd131d75f42c5e9bdaa30638d2d9e9a2fb9d50da6fe01c2bdf02e03d7\"" Apr 16 01:08:48.642691 containerd[1593]: time="2026-04-16T01:08:48.640139645Z" level=info msg="StartContainer for \"ee978f0bd131d75f42c5e9bdaa30638d2d9e9a2fb9d50da6fe01c2bdf02e03d7\"" Apr 16 01:08:49.264612 containerd[1593]: time="2026-04-16T01:08:49.252996615Z" level=info msg="StartContainer for \"ee978f0bd131d75f42c5e9bdaa30638d2d9e9a2fb9d50da6fe01c2bdf02e03d7\" returns successfully" Apr 16 01:08:49.493941 systemd[1]: run-containerd-runc-k8s.io-ee978f0bd131d75f42c5e9bdaa30638d2d9e9a2fb9d50da6fe01c2bdf02e03d7-runc.lLYJat.mount: Deactivated successfully. Apr 16 01:08:49.511862 kubelet[2807]: E0416 01:08:49.510696 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:49.564064 kubelet[2807]: E0416 01:08:49.563184 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.564064 kubelet[2807]: W0416 01:08:49.563711 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.564064 kubelet[2807]: E0416 01:08:49.563831 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.574148 kubelet[2807]: E0416 01:08:49.574120 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.574853 kubelet[2807]: W0416 01:08:49.574640 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.574853 kubelet[2807]: E0416 01:08:49.574717 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.578005 kubelet[2807]: E0416 01:08:49.577931 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.578005 kubelet[2807]: W0416 01:08:49.577945 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.578005 kubelet[2807]: E0416 01:08:49.577959 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.578988 kubelet[2807]: E0416 01:08:49.578887 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.578988 kubelet[2807]: W0416 01:08:49.578897 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.578988 kubelet[2807]: E0416 01:08:49.578907 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.583606 kubelet[2807]: E0416 01:08:49.583142 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.586097 kubelet[2807]: W0416 01:08:49.584117 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.586097 kubelet[2807]: E0416 01:08:49.584172 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.594552 kubelet[2807]: E0416 01:08:49.589933 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.594552 kubelet[2807]: W0416 01:08:49.589953 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.594552 kubelet[2807]: E0416 01:08:49.590008 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.609778 kubelet[2807]: E0416 01:08:49.606940 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.609778 kubelet[2807]: W0416 01:08:49.607181 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.626017 kubelet[2807]: E0416 01:08:49.625979 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.629795 kubelet[2807]: E0416 01:08:49.629200 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.630031 kubelet[2807]: W0416 01:08:49.630007 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.636187 kubelet[2807]: E0416 01:08:49.633952 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.639781 kubelet[2807]: E0416 01:08:49.638972 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.639781 kubelet[2807]: W0416 01:08:49.638993 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.639781 kubelet[2807]: E0416 01:08:49.639018 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.640157 kubelet[2807]: E0416 01:08:49.640145 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.640815 kubelet[2807]: W0416 01:08:49.640798 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.640893 kubelet[2807]: E0416 01:08:49.640882 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.641700 kubelet[2807]: E0416 01:08:49.641686 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.641771 kubelet[2807]: W0416 01:08:49.641761 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.641817 kubelet[2807]: E0416 01:08:49.641808 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.643671 kubelet[2807]: E0416 01:08:49.643658 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.643949 kubelet[2807]: W0416 01:08:49.643937 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.644008 kubelet[2807]: E0416 01:08:49.644001 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.653162 kubelet[2807]: E0416 01:08:49.651190 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.653162 kubelet[2807]: W0416 01:08:49.652006 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.653162 kubelet[2807]: E0416 01:08:49.653195 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.725720 kubelet[2807]: E0416 01:08:49.722971 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.725720 kubelet[2807]: W0416 01:08:49.723038 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.725720 kubelet[2807]: E0416 01:08:49.723068 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.730999 kubelet[2807]: E0416 01:08:49.729936 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.730999 kubelet[2807]: W0416 01:08:49.730864 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.733607 kubelet[2807]: E0416 01:08:49.730938 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.760708 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.764024 kubelet[2807]: W0416 01:08:49.760730 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.760792 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.761021 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.764024 kubelet[2807]: W0416 01:08:49.761029 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.761039 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.761199 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.764024 kubelet[2807]: W0416 01:08:49.761207 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.761599 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.764024 kubelet[2807]: E0416 01:08:49.761797 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.767678 kubelet[2807]: W0416 01:08:49.761806 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.767678 kubelet[2807]: E0416 01:08:49.761815 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.767678 kubelet[2807]: E0416 01:08:49.763814 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.767678 kubelet[2807]: W0416 01:08:49.763827 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.767678 kubelet[2807]: E0416 01:08:49.763838 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.776061 kubelet[2807]: E0416 01:08:49.775899 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.776061 kubelet[2807]: W0416 01:08:49.776048 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.776061 kubelet[2807]: E0416 01:08:49.776067 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.778811 kubelet[2807]: E0416 01:08:49.778665 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.778811 kubelet[2807]: W0416 01:08:49.778802 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.778899 kubelet[2807]: E0416 01:08:49.778818 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.781816 kubelet[2807]: E0416 01:08:49.781088 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.781816 kubelet[2807]: W0416 01:08:49.781797 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.781816 kubelet[2807]: E0416 01:08:49.781813 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.791976 kubelet[2807]: E0416 01:08:49.790877 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.791976 kubelet[2807]: W0416 01:08:49.791020 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.791976 kubelet[2807]: E0416 01:08:49.791037 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.799644 kubelet[2807]: E0416 01:08:49.798941 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.799644 kubelet[2807]: W0416 01:08:49.798959 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.799644 kubelet[2807]: E0416 01:08:49.798975 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.801964 kubelet[2807]: E0416 01:08:49.801819 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.802663 kubelet[2807]: W0416 01:08:49.801953 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.802663 kubelet[2807]: E0416 01:08:49.802629 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.806593 kubelet[2807]: E0416 01:08:49.805606 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.806593 kubelet[2807]: W0416 01:08:49.805742 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.806593 kubelet[2807]: E0416 01:08:49.805755 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.813604 kubelet[2807]: E0416 01:08:49.811170 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.813604 kubelet[2807]: W0416 01:08:49.811901 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.813604 kubelet[2807]: E0416 01:08:49.811973 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.823622 kubelet[2807]: E0416 01:08:49.814802 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.823622 kubelet[2807]: W0416 01:08:49.814815 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.823622 kubelet[2807]: E0416 01:08:49.814824 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.823622 kubelet[2807]: E0416 01:08:49.816855 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.823622 kubelet[2807]: W0416 01:08:49.816864 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.823622 kubelet[2807]: E0416 01:08:49.816872 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.848165 kubelet[2807]: E0416 01:08:49.830208 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.848165 kubelet[2807]: W0416 01:08:49.830655 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.848165 kubelet[2807]: E0416 01:08:49.830707 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:49.858869 kubelet[2807]: E0416 01:08:49.858652 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.858869 kubelet[2807]: W0416 01:08:49.858756 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.858869 kubelet[2807]: E0416 01:08:49.858779 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:49.869957 kubelet[2807]: E0416 01:08:49.868145 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:49.869957 kubelet[2807]: W0416 01:08:49.868641 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:49.869957 kubelet[2807]: E0416 01:08:49.868693 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.613708 kubelet[2807]: E0416 01:08:50.607167 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.615891 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.621780 kubelet[2807]: W0416 01:08:50.615911 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.615957 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.616213 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.621780 kubelet[2807]: W0416 01:08:50.616651 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.616660 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.617034 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.621780 kubelet[2807]: W0416 01:08:50.617040 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.621780 kubelet[2807]: E0416 01:08:50.617047 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.651867 kubelet[2807]: E0416 01:08:50.637042 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.651867 kubelet[2807]: W0416 01:08:50.637163 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.651867 kubelet[2807]: E0416 01:08:50.637634 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.666780 kubelet[2807]: E0416 01:08:50.666170 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:50.668721 kubelet[2807]: E0416 01:08:50.667206 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.672634 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.679847 kubelet[2807]: W0416 01:08:50.672652 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.672669 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.674864 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.679847 kubelet[2807]: W0416 01:08:50.674872 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.674881 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.674999 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.679847 kubelet[2807]: W0416 01:08:50.675005 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.675012 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.679847 kubelet[2807]: E0416 01:08:50.676853 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.680072 kubelet[2807]: W0416 01:08:50.676862 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.676872 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.677173 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.680072 kubelet[2807]: W0416 01:08:50.677181 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.677189 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.677718 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.680072 kubelet[2807]: W0416 01:08:50.677727 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.677735 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.680072 kubelet[2807]: E0416 01:08:50.677857 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.680072 kubelet[2807]: W0416 01:08:50.677865 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.680625 kubelet[2807]: E0416 01:08:50.677872 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.680625 kubelet[2807]: E0416 01:08:50.679989 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.680625 kubelet[2807]: W0416 01:08:50.679998 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.680625 kubelet[2807]: E0416 01:08:50.680008 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.734066 kubelet[2807]: E0416 01:08:50.729161 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.734066 kubelet[2807]: W0416 01:08:50.729663 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.734066 kubelet[2807]: E0416 01:08:50.729737 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.749913 kubelet[2807]: E0416 01:08:50.736861 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.749913 kubelet[2807]: W0416 01:08:50.741203 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.749913 kubelet[2807]: E0416 01:08:50.747753 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.749913 kubelet[2807]: E0416 01:08:50.748646 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.749913 kubelet[2807]: W0416 01:08:50.748658 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.749913 kubelet[2807]: E0416 01:08:50.748672 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.883079 kubelet[2807]: E0416 01:08:50.877626 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.883079 kubelet[2807]: W0416 01:08:50.877919 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.883079 kubelet[2807]: E0416 01:08:50.878033 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.883079 kubelet[2807]: E0416 01:08:50.882835 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.883079 kubelet[2807]: W0416 01:08:50.882848 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.883079 kubelet[2807]: E0416 01:08:50.882863 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.883797 kubelet[2807]: E0416 01:08:50.883626 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.883797 kubelet[2807]: W0416 01:08:50.883636 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.883797 kubelet[2807]: E0416 01:08:50.883646 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.883853 kubelet[2807]: E0416 01:08:50.883847 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.883913 kubelet[2807]: W0416 01:08:50.883855 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.883913 kubelet[2807]: E0416 01:08:50.883864 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.888939 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.901882 kubelet[2807]: W0416 01:08:50.888953 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.888967 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.889109 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.901882 kubelet[2807]: W0416 01:08:50.889116 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.889131 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.893834 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.901882 kubelet[2807]: W0416 01:08:50.893923 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.894057 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.901882 kubelet[2807]: E0416 01:08:50.899606 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.902751 kubelet[2807]: W0416 01:08:50.899627 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.902751 kubelet[2807]: E0416 01:08:50.899749 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.902751 kubelet[2807]: E0416 01:08:50.901063 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.902751 kubelet[2807]: W0416 01:08:50.901076 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.902751 kubelet[2807]: E0416 01:08:50.901089 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.905007 kubelet[2807]: E0416 01:08:50.903909 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.905007 kubelet[2807]: W0416 01:08:50.903922 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.905007 kubelet[2807]: E0416 01:08:50.903933 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.905007 kubelet[2807]: E0416 01:08:50.904200 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.905007 kubelet[2807]: W0416 01:08:50.904207 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.911641 kubelet[2807]: E0416 01:08:50.906552 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.911641 kubelet[2807]: E0416 01:08:50.908032 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.911641 kubelet[2807]: W0416 01:08:50.908042 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.911641 kubelet[2807]: E0416 01:08:50.908052 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:50.921932 kubelet[2807]: E0416 01:08:50.918962 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.921932 kubelet[2807]: W0416 01:08:50.919121 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.921932 kubelet[2807]: E0416 01:08:50.919136 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.921932 kubelet[2807]: E0416 01:08:50.920895 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.921932 kubelet[2807]: W0416 01:08:50.920904 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.921932 kubelet[2807]: E0416 01:08:50.920915 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:50.926137 kubelet[2807]: E0416 01:08:50.925765 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:50.926137 kubelet[2807]: W0416 01:08:50.925913 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:50.926137 kubelet[2807]: E0416 01:08:50.925929 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.010993 kubelet[2807]: E0416 01:08:51.009127 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.010993 kubelet[2807]: W0416 01:08:51.009619 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.010993 kubelet[2807]: E0416 01:08:51.009768 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.028181 kubelet[2807]: E0416 01:08:51.025005 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.028181 kubelet[2807]: W0416 01:08:51.025906 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.030049 kubelet[2807]: E0416 01:08:51.025979 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.035736 kubelet[2807]: E0416 01:08:51.035088 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.035736 kubelet[2807]: W0416 01:08:51.035112 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.035736 kubelet[2807]: E0416 01:08:51.035159 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.050612 kubelet[2807]: E0416 01:08:51.050134 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.052104 kubelet[2807]: W0416 01:08:51.052080 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.052210 kubelet[2807]: E0416 01:08:51.052198 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.083053 kubelet[2807]: E0416 01:08:51.082165 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.083737 kubelet[2807]: W0416 01:08:51.083715 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.083902 kubelet[2807]: E0416 01:08:51.083891 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.095841 kubelet[2807]: E0416 01:08:51.094670 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.095841 kubelet[2807]: W0416 01:08:51.094829 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.095841 kubelet[2807]: E0416 01:08:51.094857 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.110735 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.114054 kubelet[2807]: W0416 01:08:51.110832 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.111826 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.112201 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.114054 kubelet[2807]: W0416 01:08:51.112606 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.112621 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.112921 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.114054 kubelet[2807]: W0416 01:08:51.112929 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.112936 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.114054 kubelet[2807]: E0416 01:08:51.113058 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.114897 kubelet[2807]: W0416 01:08:51.113065 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.114897 kubelet[2807]: E0416 01:08:51.113073 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.117632 kubelet[2807]: E0416 01:08:51.115739 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.117632 kubelet[2807]: W0416 01:08:51.115756 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.117632 kubelet[2807]: E0416 01:08:51.115769 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.117632 kubelet[2807]: E0416 01:08:51.117018 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.117632 kubelet[2807]: W0416 01:08:51.117028 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.117632 kubelet[2807]: E0416 01:08:51.117039 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.120042 kubelet[2807]: E0416 01:08:51.119091 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.120042 kubelet[2807]: W0416 01:08:51.119865 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.120042 kubelet[2807]: E0416 01:08:51.119881 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.120114 kubelet[2807]: E0416 01:08:51.120076 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.120114 kubelet[2807]: W0416 01:08:51.120085 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.120114 kubelet[2807]: E0416 01:08:51.120095 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.122191 kubelet[2807]: E0416 01:08:51.121755 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.122191 kubelet[2807]: W0416 01:08:51.121769 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.122191 kubelet[2807]: E0416 01:08:51.121781 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.123087 kubelet[2807]: E0416 01:08:51.123022 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.123087 kubelet[2807]: W0416 01:08:51.123034 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.123087 kubelet[2807]: E0416 01:08:51.123047 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.126154 kubelet[2807]: E0416 01:08:51.124025 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.126154 kubelet[2807]: W0416 01:08:51.124187 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.126154 kubelet[2807]: E0416 01:08:51.124201 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.137153 kubelet[2807]: E0416 01:08:51.135692 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.137153 kubelet[2807]: W0416 01:08:51.135750 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.137153 kubelet[2807]: E0416 01:08:51.135767 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.522734 containerd[1593]: time="2026-04-16T01:08:51.520130078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:51.529641 containerd[1593]: time="2026-04-16T01:08:51.525741541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 16 01:08:51.531083 containerd[1593]: time="2026-04-16T01:08:51.530935696Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:51.539027 containerd[1593]: time="2026-04-16T01:08:51.538875305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:08:51.545960 containerd[1593]: time="2026-04-16T01:08:51.543593421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 3.101254809s" Apr 16 01:08:51.545960 containerd[1593]: time="2026-04-16T01:08:51.543688859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 01:08:51.604880 containerd[1593]: time="2026-04-16T01:08:51.603991973Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 01:08:51.655022 kubelet[2807]: E0416 01:08:51.654810 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:51.763106 kubelet[2807]: E0416 01:08:51.762415 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.763106 kubelet[2807]: W0416 01:08:51.762671 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.763106 kubelet[2807]: E0416 01:08:51.762692 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.766086 containerd[1593]: time="2026-04-16T01:08:51.765139043Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d\"" Apr 16 01:08:51.767926 kubelet[2807]: E0416 01:08:51.767653 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.767926 kubelet[2807]: W0416 01:08:51.767672 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.767926 kubelet[2807]: E0416 01:08:51.767689 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.769952 kubelet[2807]: E0416 01:08:51.769702 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.771931 kubelet[2807]: W0416 01:08:51.771071 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.771931 kubelet[2807]: E0416 01:08:51.771207 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.780832 kubelet[2807]: E0416 01:08:51.777792 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.780832 kubelet[2807]: W0416 01:08:51.777947 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.780832 kubelet[2807]: E0416 01:08:51.777972 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.781121 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.785191 kubelet[2807]: W0416 01:08:51.781140 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.781161 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.781820 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.785191 kubelet[2807]: W0416 01:08:51.781828 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.781837 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.781944 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.785191 kubelet[2807]: W0416 01:08:51.781950 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.785191 kubelet[2807]: E0416 01:08:51.784904 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.788763 containerd[1593]: time="2026-04-16T01:08:51.786020373Z" level=info msg="StartContainer for \"1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d\"" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.786870 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.788816 kubelet[2807]: W0416 01:08:51.786885 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.786900 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.787096 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.788816 kubelet[2807]: W0416 01:08:51.787140 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.787150 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.787659 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.788816 kubelet[2807]: W0416 01:08:51.787668 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.787677 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.788816 kubelet[2807]: E0416 01:08:51.787884 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.789025 kubelet[2807]: W0416 01:08:51.787891 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.787899 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.788048 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.789025 kubelet[2807]: W0416 01:08:51.788054 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.788061 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.788174 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.789025 kubelet[2807]: W0416 01:08:51.788182 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.788189 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.789025 kubelet[2807]: E0416 01:08:51.788964 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.789025 kubelet[2807]: W0416 01:08:51.788973 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.789579 kubelet[2807]: E0416 01:08:51.788983 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.789579 kubelet[2807]: E0416 01:08:51.789110 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.789579 kubelet[2807]: W0416 01:08:51.789117 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.789579 kubelet[2807]: E0416 01:08:51.789124 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.794018 kubelet[2807]: E0416 01:08:51.792944 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.794018 kubelet[2807]: W0416 01:08:51.793103 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.794018 kubelet[2807]: E0416 01:08:51.793115 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.802152 kubelet[2807]: E0416 01:08:51.802040 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.809911 kubelet[2807]: W0416 01:08:51.805117 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.809911 kubelet[2807]: E0416 01:08:51.805146 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.829696 kubelet[2807]: E0416 01:08:51.828662 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.829696 kubelet[2807]: W0416 01:08:51.828808 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.829696 kubelet[2807]: E0416 01:08:51.828843 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.831166 kubelet[2807]: E0416 01:08:51.830886 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.831166 kubelet[2807]: W0416 01:08:51.830906 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.831166 kubelet[2807]: E0416 01:08:51.830920 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.834833 kubelet[2807]: E0416 01:08:51.831964 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.834833 kubelet[2807]: W0416 01:08:51.832137 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.834833 kubelet[2807]: E0416 01:08:51.832150 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.841849 kubelet[2807]: E0416 01:08:51.838044 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.841849 kubelet[2807]: W0416 01:08:51.838723 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.841849 kubelet[2807]: E0416 01:08:51.838755 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.846932 kubelet[2807]: E0416 01:08:51.846158 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.846932 kubelet[2807]: W0416 01:08:51.846173 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.846932 kubelet[2807]: E0416 01:08:51.846186 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.889564 kubelet[2807]: E0416 01:08:51.888931 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.889564 kubelet[2807]: W0416 01:08:51.889096 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.889994 kubelet[2807]: E0416 01:08:51.889180 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.922883 kubelet[2807]: E0416 01:08:51.920038 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.922883 kubelet[2807]: W0416 01:08:51.920108 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.922883 kubelet[2807]: E0416 01:08:51.920129 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:51.961880 kubelet[2807]: E0416 01:08:51.958744 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:51.961880 kubelet[2807]: W0416 01:08:51.958893 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:51.961880 kubelet[2807]: E0416 01:08:51.958966 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:51.999212 kubelet[2807]: E0416 01:08:51.997197 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.037175 kubelet[2807]: W0416 01:08:52.036877 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.037175 kubelet[2807]: E0416 01:08:52.036924 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.044074 kubelet[2807]: I0416 01:08:52.014048 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f78fffc4f-mwfvh" podStartSLOduration=7.548341021 podStartE2EDuration="16.014035927s" podCreationTimestamp="2026-04-16 01:08:36 +0000 UTC" firstStartedPulling="2026-04-16 01:08:39.975208543 +0000 UTC m=+68.646638492" lastFinishedPulling="2026-04-16 01:08:48.440903455 +0000 UTC m=+77.112333398" observedRunningTime="2026-04-16 01:08:49.789987333 +0000 UTC m=+78.461417283" watchObservedRunningTime="2026-04-16 01:08:52.014035927 +0000 UTC m=+80.685465880" Apr 16 01:08:52.044074 kubelet[2807]: E0416 01:08:52.039586 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.044074 kubelet[2807]: W0416 01:08:52.039598 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.044074 kubelet[2807]: E0416 01:08:52.039611 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.063717 kubelet[2807]: E0416 01:08:52.059723 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.065764 kubelet[2807]: W0416 01:08:52.064685 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.065764 kubelet[2807]: E0416 01:08:52.064725 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.068702 kubelet[2807]: E0416 01:08:52.068690 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.069022 kubelet[2807]: W0416 01:08:52.068747 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.069022 kubelet[2807]: E0416 01:08:52.068762 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.083056 kubelet[2807]: E0416 01:08:52.082689 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.083056 kubelet[2807]: W0416 01:08:52.082754 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.083056 kubelet[2807]: E0416 01:08:52.082778 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.099780 kubelet[2807]: E0416 01:08:52.095898 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.131586 kubelet[2807]: W0416 01:08:52.127723 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.131586 kubelet[2807]: E0416 01:08:52.127927 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.133975 kubelet[2807]: E0416 01:08:52.131956 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.133975 kubelet[2807]: W0416 01:08:52.131973 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.133975 kubelet[2807]: E0416 01:08:52.131989 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.301899 kubelet[2807]: E0416 01:08:52.297735 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.301899 kubelet[2807]: W0416 01:08:52.297756 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.301899 kubelet[2807]: E0416 01:08:52.297771 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.556916 kubelet[2807]: E0416 01:08:52.552701 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:52.689153 kubelet[2807]: E0416 01:08:52.688883 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.689153 kubelet[2807]: W0416 01:08:52.689070 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.689153 kubelet[2807]: E0416 01:08:52.689092 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.690968 kubelet[2807]: E0416 01:08:52.690202 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:52.693907 kubelet[2807]: E0416 01:08:52.691210 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.693907 kubelet[2807]: W0416 01:08:52.691664 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.693907 kubelet[2807]: E0416 01:08:52.691676 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.712826 kubelet[2807]: E0416 01:08:52.707867 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.712826 kubelet[2807]: W0416 01:08:52.707923 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.712826 kubelet[2807]: E0416 01:08:52.707954 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.713622 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.719788 kubelet[2807]: W0416 01:08:52.713646 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.713670 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.717173 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.719788 kubelet[2807]: W0416 01:08:52.717186 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.717198 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.718825 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.719788 kubelet[2807]: W0416 01:08:52.718837 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.719788 kubelet[2807]: E0416 01:08:52.718848 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.720061 kubelet[2807]: E0416 01:08:52.720045 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.720804 kubelet[2807]: W0416 01:08:52.720134 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.720804 kubelet[2807]: E0416 01:08:52.720145 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.721762 kubelet[2807]: E0416 01:08:52.721753 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.721837 kubelet[2807]: W0416 01:08:52.721789 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.721837 kubelet[2807]: E0416 01:08:52.721799 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.725022 kubelet[2807]: E0416 01:08:52.725006 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.726030 kubelet[2807]: W0416 01:08:52.725926 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.726030 kubelet[2807]: E0416 01:08:52.725946 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.732079 kubelet[2807]: E0416 01:08:52.731988 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.733605 kubelet[2807]: W0416 01:08:52.732920 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.733605 kubelet[2807]: E0416 01:08:52.732947 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.752934 kubelet[2807]: E0416 01:08:52.751983 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.752934 kubelet[2807]: W0416 01:08:52.752002 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.752934 kubelet[2807]: E0416 01:08:52.752022 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.759019 containerd[1593]: time="2026-04-16T01:08:52.758813745Z" level=info msg="StartContainer for \"1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d\" returns successfully" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.759653 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.760653 kubelet[2807]: W0416 01:08:52.759898 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.759912 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.760075 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.760653 kubelet[2807]: W0416 01:08:52.760081 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.760088 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.760192 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.760653 kubelet[2807]: W0416 01:08:52.760196 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.760653 kubelet[2807]: E0416 01:08:52.760201 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.760838 kubelet[2807]: E0416 01:08:52.760708 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.760838 kubelet[2807]: W0416 01:08:52.760715 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.760838 kubelet[2807]: E0416 01:08:52.760724 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.781753 kubelet[2807]: E0416 01:08:52.779829 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.781753 kubelet[2807]: W0416 01:08:52.779855 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.781753 kubelet[2807]: E0416 01:08:52.779876 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.790960 kubelet[2807]: E0416 01:08:52.790204 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.790960 kubelet[2807]: W0416 01:08:52.790678 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.790960 kubelet[2807]: E0416 01:08:52.790721 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.800627 kubelet[2807]: E0416 01:08:52.800063 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.801113 kubelet[2807]: W0416 01:08:52.801088 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.801167 kubelet[2807]: E0416 01:08:52.801159 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.807058 kubelet[2807]: E0416 01:08:52.806202 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.816140 kubelet[2807]: W0416 01:08:52.807205 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.816140 kubelet[2807]: E0416 01:08:52.807547 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.816140 kubelet[2807]: E0416 01:08:52.809735 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.816140 kubelet[2807]: W0416 01:08:52.809745 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.816140 kubelet[2807]: E0416 01:08:52.809755 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.832066 kubelet[2807]: E0416 01:08:52.832031 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.833671 kubelet[2807]: W0416 01:08:52.832887 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.833671 kubelet[2807]: E0416 01:08:52.832914 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.836105 kubelet[2807]: E0416 01:08:52.836092 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.836201 kubelet[2807]: W0416 01:08:52.836189 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.836768 kubelet[2807]: E0416 01:08:52.836754 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.844865 kubelet[2807]: E0416 01:08:52.844842 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.844955 kubelet[2807]: W0416 01:08:52.844945 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.844995 kubelet[2807]: E0416 01:08:52.844987 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.845760 kubelet[2807]: E0416 01:08:52.845749 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.845823 kubelet[2807]: W0416 01:08:52.845815 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.845861 kubelet[2807]: E0416 01:08:52.845855 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.847148 kubelet[2807]: E0416 01:08:52.847135 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.847211 kubelet[2807]: W0416 01:08:52.847203 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.849063 kubelet[2807]: E0416 01:08:52.847766 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.856890 kubelet[2807]: E0416 01:08:52.856704 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.857788 kubelet[2807]: W0416 01:08:52.857770 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.857863 kubelet[2807]: E0416 01:08:52.857853 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.860003 kubelet[2807]: E0416 01:08:52.859989 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.860066 kubelet[2807]: W0416 01:08:52.860060 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.860104 kubelet[2807]: E0416 01:08:52.860096 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.862991 kubelet[2807]: E0416 01:08:52.862865 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.862991 kubelet[2807]: W0416 01:08:52.862881 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.862991 kubelet[2807]: E0416 01:08:52.862894 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:08:52.868134 kubelet[2807]: E0416 01:08:52.868008 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.868134 kubelet[2807]: W0416 01:08:52.868024 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.868134 kubelet[2807]: E0416 01:08:52.868036 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:52.871641 kubelet[2807]: E0416 01:08:52.869153 2807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:08:52.871641 kubelet[2807]: W0416 01:08:52.869168 2807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:08:52.871641 kubelet[2807]: E0416 01:08:52.869180 2807 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:08:53.091199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d-rootfs.mount: Deactivated successfully. Apr 16 01:08:53.248783 containerd[1593]: time="2026-04-16T01:08:53.242833120Z" level=info msg="shim disconnected" id=1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d namespace=k8s.io Apr 16 01:08:53.248783 containerd[1593]: time="2026-04-16T01:08:53.242988652Z" level=warning msg="cleaning up after shim disconnected" id=1554578c7141ed88c30a98bd798fbc079198a0c661b92243018aea441bffc68d namespace=k8s.io Apr 16 01:08:53.248783 containerd[1593]: time="2026-04-16T01:08:53.242995915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:08:53.546362 kubelet[2807]: E0416 01:08:53.546093 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:53.724878 containerd[1593]: time="2026-04-16T01:08:53.723807235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 01:08:54.550913 kubelet[2807]: E0416 01:08:54.550180 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:56.565757 kubelet[2807]: E0416 01:08:56.565634 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:08:57.549037 kubelet[2807]: E0416 01:08:57.548870 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:08:58.548789 
kubelet[2807]: E0416 01:08:58.545981 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:00.560870 kubelet[2807]: E0416 01:09:00.549734 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:02.547166 kubelet[2807]: E0416 01:09:02.545822 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:04.542530 kubelet[2807]: E0416 01:09:04.541806 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:06.544157 kubelet[2807]: E0416 01:09:06.543777 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:08.555181 kubelet[2807]: E0416 01:09:08.554798 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:10.553043 kubelet[2807]: E0416 01:09:10.551754 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:10.556523 kubelet[2807]: E0416 01:09:10.551933 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:12.654989 kubelet[2807]: E0416 01:09:12.650069 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:14.561974 kubelet[2807]: E0416 01:09:14.561023 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:15.818820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055096067.mount: Deactivated successfully. Apr 16 01:09:15.916781 containerd[1593]: time="2026-04-16T01:09:15.916405167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:15.918557 containerd[1593]: time="2026-04-16T01:09:15.918391290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 01:09:15.923173 containerd[1593]: time="2026-04-16T01:09:15.922758420Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:15.932791 containerd[1593]: time="2026-04-16T01:09:15.931825216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:15.942507 containerd[1593]: time="2026-04-16T01:09:15.942155832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 22.218250648s" Apr 16 01:09:15.942507 containerd[1593]: time="2026-04-16T01:09:15.942382624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 01:09:15.981936 containerd[1593]: time="2026-04-16T01:09:15.981150008Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 01:09:16.136815 containerd[1593]: time="2026-04-16T01:09:16.136540576Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad\"" Apr 16 01:09:16.137830 containerd[1593]: time="2026-04-16T01:09:16.137723249Z" level=info msg="StartContainer for \"41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad\"" Apr 16 01:09:16.490958 containerd[1593]: time="2026-04-16T01:09:16.490348585Z" level=info msg="StartContainer for \"41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad\" returns successfully" Apr 16 01:09:16.549928 kubelet[2807]: E0416 01:09:16.546203 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:16.823050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad-rootfs.mount: Deactivated successfully. 
Apr 16 01:09:16.993395 containerd[1593]: time="2026-04-16T01:09:16.992960632Z" level=info msg="shim disconnected" id=41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad namespace=k8s.io Apr 16 01:09:16.993395 containerd[1593]: time="2026-04-16T01:09:16.993131081Z" level=warning msg="cleaning up after shim disconnected" id=41bd3baacd415e27be407767937222bd0dfb5bf3002fac9eeaefc9d1c516f2ad namespace=k8s.io Apr 16 01:09:16.993395 containerd[1593]: time="2026-04-16T01:09:16.993150417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:09:17.173181 containerd[1593]: time="2026-04-16T01:09:17.171143170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 01:09:18.555685 kubelet[2807]: E0416 01:09:18.555469 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:20.548988 kubelet[2807]: E0416 01:09:20.547682 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:22.223867 containerd[1593]: time="2026-04-16T01:09:22.223514334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:22.227762 containerd[1593]: time="2026-04-16T01:09:22.227158218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 01:09:22.230297 containerd[1593]: time="2026-04-16T01:09:22.230083938Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:22.262739 containerd[1593]: time="2026-04-16T01:09:22.262371498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:22.263888 containerd[1593]: time="2026-04-16T01:09:22.263759885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.092397014s" Apr 16 01:09:22.263888 containerd[1593]: time="2026-04-16T01:09:22.263857073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 01:09:22.292612 containerd[1593]: time="2026-04-16T01:09:22.291883233Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 01:09:22.359584 containerd[1593]: time="2026-04-16T01:09:22.359341643Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3\"" Apr 16 01:09:22.369403 containerd[1593]: time="2026-04-16T01:09:22.369184538Z" level=info msg="StartContainer for \"8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3\"" Apr 16 01:09:22.548117 kubelet[2807]: E0416 01:09:22.547382 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:22.559900 systemd[1]: run-containerd-runc-k8s.io-8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3-runc.VkdB41.mount: Deactivated successfully. Apr 16 01:09:22.761640 containerd[1593]: time="2026-04-16T01:09:22.758367031Z" level=info msg="StartContainer for \"8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3\" returns successfully" Apr 16 01:09:24.550926 kubelet[2807]: E0416 01:09:24.548075 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:25.087056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3-rootfs.mount: Deactivated successfully. Apr 16 01:09:25.098364 kubelet[2807]: I0416 01:09:25.098167 2807 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 16 01:09:25.113428 containerd[1593]: time="2026-04-16T01:09:25.110125182Z" level=info msg="shim disconnected" id=8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3 namespace=k8s.io Apr 16 01:09:25.113428 containerd[1593]: time="2026-04-16T01:09:25.110571870Z" level=warning msg="cleaning up after shim disconnected" id=8d7e30379400f8ba880d630f81712b81b09ac87f76e4a900d3df85548e8d72e3 namespace=k8s.io Apr 16 01:09:25.113428 containerd[1593]: time="2026-04-16T01:09:25.110587930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:09:26.131711 kubelet[2807]: I0416 01:09:26.124160 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e305132-072a-4841-9d59-183ab9643f4e-config-volume\") pod \"coredns-674b8bbfcf-28mcm\" (UID: \"9e305132-072a-4841-9d59-183ab9643f4e\") " pod="kube-system/coredns-674b8bbfcf-28mcm" Apr 16 01:09:26.221919 kubelet[2807]: I0416 01:09:26.132071 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smb2n\" (UniqueName: \"kubernetes.io/projected/9e305132-072a-4841-9d59-183ab9643f4e-kube-api-access-smb2n\") pod \"coredns-674b8bbfcf-28mcm\" (UID: \"9e305132-072a-4841-9d59-183ab9643f4e\") " pod="kube-system/coredns-674b8bbfcf-28mcm" Apr 16 01:09:26.385687 kubelet[2807]: I0416 01:09:26.320333 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b148b156-4c3c-440d-9a9c-de6e9bd705a3-config-volume\") pod \"coredns-674b8bbfcf-wt9ng\" (UID: \"b148b156-4c3c-440d-9a9c-de6e9bd705a3\") " 
pod="kube-system/coredns-674b8bbfcf-wt9ng" Apr 16 01:09:26.418144 kubelet[2807]: I0416 01:09:26.417811 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7tx\" (UniqueName: \"kubernetes.io/projected/b148b156-4c3c-440d-9a9c-de6e9bd705a3-kube-api-access-js7tx\") pod \"coredns-674b8bbfcf-wt9ng\" (UID: \"b148b156-4c3c-440d-9a9c-de6e9bd705a3\") " pod="kube-system/coredns-674b8bbfcf-wt9ng" Apr 16 01:09:26.745050 kubelet[2807]: I0416 01:09:26.741507 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8h8\" (UniqueName: \"kubernetes.io/projected/aebb0dae-448b-478a-a00a-811005b5982c-kube-api-access-4z8h8\") pod \"calico-kube-controllers-c4f75b597-sfg9g\" (UID: \"aebb0dae-448b-478a-a00a-811005b5982c\") " pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" Apr 16 01:09:26.745050 kubelet[2807]: I0416 01:09:26.744712 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebb0dae-448b-478a-a00a-811005b5982c-tigera-ca-bundle\") pod \"calico-kube-controllers-c4f75b597-sfg9g\" (UID: \"aebb0dae-448b-478a-a00a-811005b5982c\") " pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" Apr 16 01:09:27.152796 kubelet[2807]: I0416 01:09:27.152028 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfz5f\" (UniqueName: \"kubernetes.io/projected/0a26e0b5-baae-47de-8478-3a9191a4d5e8-kube-api-access-sfz5f\") pod \"calico-apiserver-68bd47d56c-vlgfc\" (UID: \"0a26e0b5-baae-47de-8478-3a9191a4d5e8\") " pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" Apr 16 01:09:27.223764 kubelet[2807]: I0416 01:09:27.223534 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a26e0b5-baae-47de-8478-3a9191a4d5e8-calico-apiserver-certs\") pod \"calico-apiserver-68bd47d56c-vlgfc\" (UID: \"0a26e0b5-baae-47de-8478-3a9191a4d5e8\") " pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" Apr 16 01:09:27.332883 kubelet[2807]: I0416 01:09:27.329560 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-gmdnv\" (UID: \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\") " pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:27.374056 kubelet[2807]: I0416 01:09:27.370554 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fldkf\" (UniqueName: \"kubernetes.io/projected/0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe-kube-api-access-fldkf\") pod \"goldmane-5b85766d88-gmdnv\" (UID: \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\") " pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:27.374056 kubelet[2807]: I0416 01:09:27.370787 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe-config\") pod \"goldmane-5b85766d88-gmdnv\" (UID: \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\") " pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:27.374056 kubelet[2807]: I0416 01:09:27.370821 2807 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe-goldmane-key-pair\") pod \"goldmane-5b85766d88-gmdnv\" (UID: \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\") " pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:28.093017 kubelet[2807]: E0416 01:09:28.091554 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:28.102942 kubelet[2807]: E0416 01:09:28.102865 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:28.112213 containerd[1593]: time="2026-04-16T01:09:28.111985820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wt9ng,Uid:b148b156-4c3c-440d-9a9c-de6e9bd705a3,Namespace:kube-system,Attempt:0,}" Apr 16 01:09:28.116802 containerd[1593]: time="2026-04-16T01:09:28.114870669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28mcm,Uid:9e305132-072a-4841-9d59-183ab9643f4e,Namespace:kube-system,Attempt:0,}" Apr 16 01:09:28.130051 containerd[1593]: time="2026-04-16T01:09:28.130004650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqrfc,Uid:73d74924-8e40-46ed-8ff0-31c0cdbb144c,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:28.233593 kubelet[2807]: I0416 01:09:28.223823 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-nginx-config\") pod \"whisker-78d845779-8pjtj\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:28.233593 kubelet[2807]: I0416 01:09:28.223919 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rfw\" (UniqueName: \"kubernetes.io/projected/4f5df53a-941c-45fc-a689-430c9f635b42-kube-api-access-47rfw\") pod \"whisker-78d845779-8pjtj\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:28.233593 kubelet[2807]: I0416 01:09:28.223939 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-ca-bundle\") pod \"whisker-78d845779-8pjtj\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:28.233593 kubelet[2807]: I0416 01:09:28.223962 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-backend-key-pair\") pod \"whisker-78d845779-8pjtj\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:28.313688 containerd[1593]: time="2026-04-16T01:09:28.311479113Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 01:09:28.469487 containerd[1593]: time="2026-04-16T01:09:28.464840063Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-gmdnv,Uid:0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:28.469487 containerd[1593]: time="2026-04-16T01:09:28.467084950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-vlgfc,Uid:0a26e0b5-baae-47de-8478-3a9191a4d5e8,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:28.477207 kubelet[2807]: I0416 01:09:28.471380 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/091fc483-3bbd-4649-ab92-475b732c9825-calico-apiserver-certs\") pod \"calico-apiserver-68bd47d56c-kk7cd\" (UID: \"091fc483-3bbd-4649-ab92-475b732c9825\") " pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" Apr 16 01:09:28.477207 kubelet[2807]: I0416 01:09:28.471427 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnp82\" (UniqueName: \"kubernetes.io/projected/091fc483-3bbd-4649-ab92-475b732c9825-kube-api-access-pnp82\") pod \"calico-apiserver-68bd47d56c-kk7cd\" (UID: \"091fc483-3bbd-4649-ab92-475b732c9825\") " pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" Apr 16 01:09:29.409361 containerd[1593]: time="2026-04-16T01:09:29.375038487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4f75b597-sfg9g,Uid:aebb0dae-448b-478a-a00a-811005b5982c,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:29.863367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869579616.mount: Deactivated successfully. Apr 16 01:09:30.466647 kubelet[2807]: E0416 01:09:30.454486 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.904s" Apr 16 01:09:30.780169 containerd[1593]: time="2026-04-16T01:09:30.772192215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d845779-8pjtj,Uid:4f5df53a-941c-45fc-a689-430c9f635b42,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:31.640575 containerd[1593]: time="2026-04-16T01:09:31.632393136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-kk7cd,Uid:091fc483-3bbd-4649-ab92-475b732c9825,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:32.715021 containerd[1593]: time="2026-04-16T01:09:32.714675305Z" level=info msg="CreateContainer within sandbox \"f5744337ae5a2ac90ce606166fbde12dc2970281636edfca6c69a1589d756bbd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d6f52d8be2b846cdcf775bd4415ef52b0cac2f3db695926f13fddac5d13129d6\"" Apr 16 01:09:32.996011 containerd[1593]: time="2026-04-16T01:09:32.991115967Z" level=info msg="StartContainer for \"d6f52d8be2b846cdcf775bd4415ef52b0cac2f3db695926f13fddac5d13129d6\"" Apr 16 01:09:34.180775 containerd[1593]: time="2026-04-16T01:09:34.179976500Z" level=error msg="Failed to destroy network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.194776 containerd[1593]: time="2026-04-16T01:09:34.194657838Z" level=error msg="encountered an error cleaning up failed sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.196980 containerd[1593]: time="2026-04-16T01:09:34.196025027Z" level=error msg="Failed to destroy network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.216406 containerd[1593]: time="2026-04-16T01:09:34.216135238Z" level=error msg="Failed to destroy network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.217723 containerd[1593]: time="2026-04-16T01:09:34.217695326Z" level=error msg="encountered an error cleaning up failed sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.249772 containerd[1593]: time="2026-04-16T01:09:34.235024820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28mcm,Uid:9e305132-072a-4841-9d59-183ab9643f4e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.278131 containerd[1593]: time="2026-04-16T01:09:34.277938678Z" level=error msg="encountered an error cleaning up failed sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.278370 containerd[1593]: time="2026-04-16T01:09:34.278178309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqrfc,Uid:73d74924-8e40-46ed-8ff0-31c0cdbb144c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.301061 kubelet[2807]: E0416 01:09:34.300990 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.301061 kubelet[2807]: E0416 01:09:34.301051 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:09:34.301061 kubelet[2807]: E0416 01:09:34.301069 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gqrfc" Apr 16 01:09:34.301512 kubelet[2807]: E0416 01:09:34.301110 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gqrfc_calico-system(73d74924-8e40-46ed-8ff0-31c0cdbb144c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gqrfc_calico-system(73d74924-8e40-46ed-8ff0-31c0cdbb144c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:34.301512 kubelet[2807]: E0416 01:09:34.301139 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.301512 kubelet[2807]: E0416 01:09:34.301186 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-28mcm" Apr 16 01:09:34.301744 kubelet[2807]: E0416 01:09:34.301202 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-28mcm" Apr 16 01:09:34.301744 kubelet[2807]: E0416 01:09:34.301370 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-28mcm_kube-system(9e305132-072a-4841-9d59-183ab9643f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-28mcm_kube-system(9e305132-072a-4841-9d59-183ab9643f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-28mcm" podUID="9e305132-072a-4841-9d59-183ab9643f4e" Apr 16 01:09:34.334421 containerd[1593]: time="2026-04-16T01:09:34.334177044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wt9ng,Uid:b148b156-4c3c-440d-9a9c-de6e9bd705a3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.347857 kubelet[2807]: E0416 01:09:34.347647 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.348194 kubelet[2807]: E0416 01:09:34.348020 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wt9ng" Apr 16 01:09:34.348194 kubelet[2807]: E0416 01:09:34.348126 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wt9ng" Apr 16 01:09:34.350900 kubelet[2807]: E0416 01:09:34.350178 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wt9ng_kube-system(b148b156-4c3c-440d-9a9c-de6e9bd705a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wt9ng_kube-system(b148b156-4c3c-440d-9a9c-de6e9bd705a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wt9ng" podUID="b148b156-4c3c-440d-9a9c-de6e9bd705a3" Apr 16 01:09:34.434722 containerd[1593]: time="2026-04-16T01:09:34.434484420Z" level=error msg="Failed to destroy network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.435557 containerd[1593]: time="2026-04-16T01:09:34.435463026Z" level=error msg="encountered an error cleaning up failed sandbox 
\"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.435767 containerd[1593]: time="2026-04-16T01:09:34.435685662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4f75b597-sfg9g,Uid:aebb0dae-448b-478a-a00a-811005b5982c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.446618 kubelet[2807]: E0416 01:09:34.438667 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.446618 kubelet[2807]: E0416 01:09:34.442550 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" Apr 16 01:09:34.446618 kubelet[2807]: E0416 01:09:34.443193 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" Apr 16 01:09:34.447722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d-shm.mount: Deactivated successfully. Apr 16 01:09:34.450987 kubelet[2807]: E0416 01:09:34.449960 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c4f75b597-sfg9g_calico-system(aebb0dae-448b-478a-a00a-811005b5982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c4f75b597-sfg9g_calico-system(aebb0dae-448b-478a-a00a-811005b5982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" podUID="aebb0dae-448b-478a-a00a-811005b5982c" Apr 16 01:09:34.447861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8-shm.mount: Deactivated successfully. 
Apr 16 01:09:34.447991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5-shm.mount: Deactivated successfully. Apr 16 01:09:34.463967 containerd[1593]: time="2026-04-16T01:09:34.458557107Z" level=info msg="StartContainer for \"d6f52d8be2b846cdcf775bd4415ef52b0cac2f3db695926f13fddac5d13129d6\" returns successfully" Apr 16 01:09:34.470914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0-shm.mount: Deactivated successfully. Apr 16 01:09:34.613046 containerd[1593]: time="2026-04-16T01:09:34.610450246Z" level=error msg="Failed to destroy network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.649381 containerd[1593]: time="2026-04-16T01:09:34.643004117Z" level=error msg="encountered an error cleaning up failed sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.649381 containerd[1593]: time="2026-04-16T01:09:34.643528923Z" level=error msg="Failed to destroy network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.649381 containerd[1593]: time="2026-04-16T01:09:34.645885281Z" level=error msg="encountered an error cleaning up failed sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.649381 containerd[1593]: time="2026-04-16T01:09:34.645995836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d845779-8pjtj,Uid:4f5df53a-941c-45fc-a689-430c9f635b42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.657817 containerd[1593]: time="2026-04-16T01:09:34.649797643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gmdnv,Uid:0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.658686 kubelet[2807]: E0416 01:09:34.656711 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.658686 kubelet[2807]: E0416 01:09:34.656881 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:34.658686 kubelet[2807]: E0416 01:09:34.656900 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gmdnv" Apr 16 01:09:34.650081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff-shm.mount: Deactivated successfully. Apr 16 01:09:34.665839 kubelet[2807]: E0416 01:09:34.657011 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-gmdnv_calico-system(0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-gmdnv_calico-system(0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gmdnv" podUID="0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe" Apr 16 01:09:34.667412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9-shm.mount: Deactivated successfully. 
Apr 16 01:09:34.682628 kubelet[2807]: E0416 01:09:34.678500 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.709965 kubelet[2807]: E0416 01:09:34.705469 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:34.709965 kubelet[2807]: E0416 01:09:34.706073 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78d845779-8pjtj" Apr 16 01:09:34.709965 kubelet[2807]: E0416 01:09:34.706635 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78d845779-8pjtj_calico-system(4f5df53a-941c-45fc-a689-430c9f635b42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78d845779-8pjtj_calico-system(4f5df53a-941c-45fc-a689-430c9f635b42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78d845779-8pjtj" podUID="4f5df53a-941c-45fc-a689-430c9f635b42" Apr 16 01:09:34.824978 containerd[1593]: time="2026-04-16T01:09:34.824870858Z" level=error msg="Failed to destroy network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.848969 containerd[1593]: time="2026-04-16T01:09:34.848339833Z" level=error msg="encountered an error cleaning up failed sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.848969 containerd[1593]: time="2026-04-16T01:09:34.848530111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-kk7cd,Uid:091fc483-3bbd-4649-ab92-475b732c9825,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.849750 kubelet[2807]: E0416 01:09:34.849546 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.850150 kubelet[2807]: E0416 01:09:34.849812 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" Apr 16 01:09:34.850150 kubelet[2807]: E0416 01:09:34.849886 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" Apr 16 01:09:34.850556 kubelet[2807]: E0416 01:09:34.850322 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bd47d56c-kk7cd_calico-system(091fc483-3bbd-4649-ab92-475b732c9825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bd47d56c-kk7cd_calico-system(091fc483-3bbd-4649-ab92-475b732c9825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" podUID="091fc483-3bbd-4649-ab92-475b732c9825" Apr 16 01:09:34.876540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085-shm.mount: Deactivated successfully. 
Apr 16 01:09:34.907654 containerd[1593]: time="2026-04-16T01:09:34.906897425Z" level=error msg="Failed to destroy network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.944348 containerd[1593]: time="2026-04-16T01:09:34.940833070Z" level=error msg="encountered an error cleaning up failed sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.944348 containerd[1593]: time="2026-04-16T01:09:34.940946634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-vlgfc,Uid:0a26e0b5-baae-47de-8478-3a9191a4d5e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.957845 kubelet[2807]: E0416 01:09:34.941105 2807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:34.957845 kubelet[2807]: E0416 01:09:34.941334 2807 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" Apr 16 01:09:34.957845 kubelet[2807]: E0416 01:09:34.941779 2807 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" Apr 16 01:09:34.958196 kubelet[2807]: E0416 01:09:34.941954 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bd47d56c-vlgfc_calico-system(0a26e0b5-baae-47de-8478-3a9191a4d5e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bd47d56c-vlgfc_calico-system(0a26e0b5-baae-47de-8478-3a9191a4d5e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" podUID="0a26e0b5-baae-47de-8478-3a9191a4d5e8" Apr 16 01:09:34.958196 kubelet[2807]: I0416 01:09:34.942853 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:35.022113 kubelet[2807]: I0416 01:09:35.016761 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:35.022839 containerd[1593]: time="2026-04-16T01:09:35.017930666Z" level=info msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" Apr 16 01:09:35.036495 kubelet[2807]: I0416 01:09:35.036108 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:35.038915 containerd[1593]: time="2026-04-16T01:09:35.037816838Z" level=info msg="Ensure that sandbox af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0 in task-service has been cleanup successfully" Apr 16 01:09:35.040736 containerd[1593]: time="2026-04-16T01:09:35.038728487Z" level=info msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" Apr 16 01:09:35.049152 containerd[1593]: time="2026-04-16T01:09:35.048861445Z" level=info msg="Ensure that sandbox 0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d in task-service has been cleanup successfully" Apr 16 01:09:35.068794 kubelet[2807]: I0416 01:09:35.066719 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:35.069028 containerd[1593]: time="2026-04-16T01:09:35.068853461Z" level=info msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" Apr 16 01:09:35.069281 containerd[1593]: time="2026-04-16T01:09:35.069135520Z" level=info msg="Ensure that sandbox c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8 in task-service has been cleanup successfully" Apr 16 01:09:35.070487 containerd[1593]: time="2026-04-16T01:09:35.070465377Z" level=info msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" Apr 16 01:09:35.100739 containerd[1593]: time="2026-04-16T01:09:35.095076385Z" level=info msg="Ensure that sandbox e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9 in task-service has been cleanup successfully" Apr 16 01:09:35.340764 kubelet[2807]: I0416 01:09:35.337382 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:35.406141 containerd[1593]: time="2026-04-16T01:09:35.406097496Z" level=info msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" Apr 16 01:09:35.407365 containerd[1593]: time="2026-04-16T01:09:35.407197653Z" level=info msg="Ensure that sandbox 11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085 in task-service has been cleanup successfully" Apr 16 01:09:35.447185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122-shm.mount: Deactivated successfully. 
Apr 16 01:09:35.649044 kubelet[2807]: I0416 01:09:35.646479 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l9vc6" podStartSLOduration=16.167904117 podStartE2EDuration="57.64646508s" podCreationTimestamp="2026-04-16 01:08:38 +0000 UTC" firstStartedPulling="2026-04-16 01:08:40.786101463 +0000 UTC m=+69.457531417" lastFinishedPulling="2026-04-16 01:09:22.264662438 +0000 UTC m=+110.936092380" observedRunningTime="2026-04-16 01:09:35.646139943 +0000 UTC m=+124.317569893" watchObservedRunningTime="2026-04-16 01:09:35.64646508 +0000 UTC m=+124.317895033" Apr 16 01:09:35.707106 kubelet[2807]: I0416 01:09:35.698669 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:35.783867 containerd[1593]: time="2026-04-16T01:09:35.738557563Z" level=info msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" Apr 16 01:09:35.805874 containerd[1593]: time="2026-04-16T01:09:35.805775980Z" level=info msg="Ensure that sandbox d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff in task-service has been cleanup successfully" Apr 16 01:09:35.833784 kubelet[2807]: I0416 01:09:35.831708 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:35.875007 containerd[1593]: time="2026-04-16T01:09:35.873361928Z" level=info msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" Apr 16 01:09:35.875007 containerd[1593]: time="2026-04-16T01:09:35.873779502Z" level=info msg="Ensure that sandbox 6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5 in task-service has been cleanup successfully" Apr 16 01:09:36.077742 containerd[1593]: time="2026-04-16T01:09:36.077438109Z" level=error msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" failed" error="failed to destroy network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.082958 systemd[1]: run-containerd-runc-k8s.io-d6f52d8be2b846cdcf775bd4415ef52b0cac2f3db695926f13fddac5d13129d6-runc.VZ7tQl.mount: Deactivated successfully. 
Apr 16 01:09:36.085107 kubelet[2807]: E0416 01:09:36.084746 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:36.085107 kubelet[2807]: E0416 01:09:36.084886 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9"} Apr 16 01:09:36.085107 kubelet[2807]: E0416 01:09:36.084946 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f5df53a-941c-45fc-a689-430c9f635b42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.085107 kubelet[2807]: E0416 01:09:36.084974 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f5df53a-941c-45fc-a689-430c9f635b42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78d845779-8pjtj" podUID="4f5df53a-941c-45fc-a689-430c9f635b42" Apr 16 01:09:36.088434 containerd[1593]: time="2026-04-16T01:09:36.088208923Z" level=error msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" failed" error="failed to destroy network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.089210 kubelet[2807]: E0416 01:09:36.089046 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:36.089210 kubelet[2807]: E0416 01:09:36.089144 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8"} Apr 16 01:09:36.089210 kubelet[2807]: E0416 01:09:36.089168 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b148b156-4c3c-440d-9a9c-de6e9bd705a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.089210 kubelet[2807]: E0416 01:09:36.089186 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b148b156-4c3c-440d-9a9c-de6e9bd705a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wt9ng" podUID="b148b156-4c3c-440d-9a9c-de6e9bd705a3" Apr 16 01:09:36.096492 containerd[1593]: time="2026-04-16T01:09:36.096315856Z" level=error msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" failed" error="failed to destroy network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.099175 kubelet[2807]: E0416 01:09:36.097998 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:36.105515 kubelet[2807]: E0416 01:09:36.104114 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0"} Apr 16 01:09:36.105515 kubelet[2807]: E0416 01:09:36.104498 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aebb0dae-448b-478a-a00a-811005b5982c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.105515 kubelet[2807]: E0416 01:09:36.104529 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aebb0dae-448b-478a-a00a-811005b5982c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" podUID="aebb0dae-448b-478a-a00a-811005b5982c" Apr 16 01:09:36.169710 containerd[1593]: time="2026-04-16T01:09:36.169366486Z" level=error msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" 
failed" error="failed to destroy network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.171867 kubelet[2807]: E0416 01:09:36.171674 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:36.171867 kubelet[2807]: E0416 01:09:36.171725 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d"} Apr 16 01:09:36.171867 kubelet[2807]: E0416 01:09:36.171818 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e305132-072a-4841-9d59-183ab9643f4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.171867 kubelet[2807]: E0416 01:09:36.171838 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e305132-072a-4841-9d59-183ab9643f4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-28mcm" podUID="9e305132-072a-4841-9d59-183ab9643f4e" Apr 16 01:09:36.222161 containerd[1593]: time="2026-04-16T01:09:36.221434751Z" level=error msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" failed" error="failed to destroy network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.231061 kubelet[2807]: E0416 01:09:36.230516 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:36.231061 kubelet[2807]: E0416 01:09:36.230665 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085"} Apr 16 01:09:36.231061 kubelet[2807]: E0416 
01:09:36.230791 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"091fc483-3bbd-4649-ab92-475b732c9825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.231061 kubelet[2807]: E0416 01:09:36.230825 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"091fc483-3bbd-4649-ab92-475b732c9825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" podUID="091fc483-3bbd-4649-ab92-475b732c9825" Apr 16 01:09:36.305685 containerd[1593]: time="2026-04-16T01:09:36.304407381Z" level=error msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" failed" error="failed to destroy network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.307655 kubelet[2807]: E0416 01:09:36.307211 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:36.313547 kubelet[2807]: E0416 01:09:36.311539 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5"} Apr 16 01:09:36.321444 kubelet[2807]: E0416 01:09:36.319945 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.322208 kubelet[2807]: E0416 01:09:36.320969 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73d74924-8e40-46ed-8ff0-31c0cdbb144c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-gqrfc" podUID="73d74924-8e40-46ed-8ff0-31c0cdbb144c" Apr 16 01:09:36.342366 containerd[1593]: time="2026-04-16T01:09:36.332182732Z" level=error msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" failed" error="failed to destroy network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:36.348672 kubelet[2807]: E0416 01:09:36.346922 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:36.348672 kubelet[2807]: E0416 01:09:36.347983 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff"} Apr 16 01:09:36.349034 kubelet[2807]: E0416 01:09:36.349012 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:36.350804 kubelet[2807]: E0416 01:09:36.350659 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gmdnv" podUID="0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe" Apr 16 01:09:36.873690 kubelet[2807]: I0416 01:09:36.873489 2807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:36.889183 containerd[1593]: time="2026-04-16T01:09:36.888948448Z" level=info msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" Apr 16 01:09:36.893533 containerd[1593]: time="2026-04-16T01:09:36.891068649Z" level=info msg="Ensure that sandbox 7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122 in task-service has been cleanup successfully" Apr 16 01:09:37.020138 containerd[1593]: time="2026-04-16T01:09:37.020076023Z" level=error msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" failed" error="failed to destroy network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:09:37.020998 kubelet[2807]: E0416 01:09:37.020962 2807 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:37.021941 kubelet[2807]: E0416 01:09:37.021811 2807 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122"} Apr 16 01:09:37.022054 kubelet[2807]: E0416 01:09:37.022039 2807 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a26e0b5-baae-47de-8478-3a9191a4d5e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:09:37.022445 kubelet[2807]: E0416 01:09:37.022381 2807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a26e0b5-baae-47de-8478-3a9191a4d5e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" podUID="0a26e0b5-baae-47de-8478-3a9191a4d5e8" Apr 16 01:09:37.437747 containerd[1593]: time="2026-04-16T01:09:37.437401144Z" level=info msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.945 [INFO][4316] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.946 [INFO][4316] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" iface="eth0" netns="/var/run/netns/cni-e6183237-75fd-6118-4b54-b9b5b5c3deae" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.947 [INFO][4316] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" iface="eth0" netns="/var/run/netns/cni-e6183237-75fd-6118-4b54-b9b5b5c3deae" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.958 [INFO][4316] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" iface="eth0" netns="/var/run/netns/cni-e6183237-75fd-6118-4b54-b9b5b5c3deae" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.958 [INFO][4316] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:37.960 [INFO][4316] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.079 [INFO][4333] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.080 [INFO][4333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.080 [INFO][4333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.088 [WARNING][4333] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.088 [INFO][4333] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.091 [INFO][4333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:38.123070 containerd[1593]: 2026-04-16 01:09:38.115 [INFO][4316] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:09:38.132204 containerd[1593]: time="2026-04-16T01:09:38.131634030Z" level=info msg="TearDown network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" successfully" Apr 16 01:09:38.133689 systemd[1]: run-netns-cni\x2de6183237\x2d75fd\x2d6118\x2d4b54\x2db9b5b5c3deae.mount: Deactivated successfully. 
Apr 16 01:09:38.136900 containerd[1593]: time="2026-04-16T01:09:38.134788976Z" level=info msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" returns successfully" Apr 16 01:09:38.266469 kubelet[2807]: I0416 01:09:38.265401 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47rfw\" (UniqueName: \"kubernetes.io/projected/4f5df53a-941c-45fc-a689-430c9f635b42-kube-api-access-47rfw\") pod \"4f5df53a-941c-45fc-a689-430c9f635b42\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " Apr 16 01:09:38.266469 kubelet[2807]: I0416 01:09:38.265614 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-nginx-config\") pod \"4f5df53a-941c-45fc-a689-430c9f635b42\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " Apr 16 01:09:38.266469 kubelet[2807]: I0416 01:09:38.265657 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-ca-bundle\") pod \"4f5df53a-941c-45fc-a689-430c9f635b42\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " Apr 16 01:09:38.266469 kubelet[2807]: I0416 01:09:38.265741 2807 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-backend-key-pair\") pod \"4f5df53a-941c-45fc-a689-430c9f635b42\" (UID: \"4f5df53a-941c-45fc-a689-430c9f635b42\") " Apr 16 01:09:38.268336 kubelet[2807]: I0416 01:09:38.267212 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4f5df53a-941c-45fc-a689-430c9f635b42" (UID: "4f5df53a-941c-45fc-a689-430c9f635b42"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:09:38.268336 kubelet[2807]: I0416 01:09:38.267614 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4f5df53a-941c-45fc-a689-430c9f635b42" (UID: "4f5df53a-941c-45fc-a689-430c9f635b42"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:09:38.290457 kubelet[2807]: I0416 01:09:38.289623 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4f5df53a-941c-45fc-a689-430c9f635b42" (UID: "4f5df53a-941c-45fc-a689-430c9f635b42"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 01:09:38.299048 systemd[1]: var-lib-kubelet-pods-4f5df53a\x2d941c\x2d45fc\x2da689\x2d430c9f635b42-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 01:09:38.302648 kubelet[2807]: I0416 01:09:38.302417 2807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5df53a-941c-45fc-a689-430c9f635b42-kube-api-access-47rfw" (OuterVolumeSpecName: "kube-api-access-47rfw") pod "4f5df53a-941c-45fc-a689-430c9f635b42" (UID: "4f5df53a-941c-45fc-a689-430c9f635b42"). 
InnerVolumeSpecName "kube-api-access-47rfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 01:09:38.303731 systemd[1]: var-lib-kubelet-pods-4f5df53a\x2d941c\x2d45fc\x2da689\x2d430c9f635b42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47rfw.mount: Deactivated successfully. Apr 16 01:09:38.368488 kubelet[2807]: I0416 01:09:38.367989 2807 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 01:09:38.368488 kubelet[2807]: I0416 01:09:38.368334 2807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47rfw\" (UniqueName: \"kubernetes.io/projected/4f5df53a-941c-45fc-a689-430c9f635b42-kube-api-access-47rfw\") on node \"localhost\" DevicePath \"\"" Apr 16 01:09:38.368488 kubelet[2807]: I0416 01:09:38.368385 2807 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 01:09:38.368488 kubelet[2807]: I0416 01:09:38.368396 2807 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f5df53a-941c-45fc-a689-430c9f635b42-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 01:09:39.622136 kubelet[2807]: I0416 01:09:39.621704 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4ea63ba6-372f-4274-9779-46e2f07d1b48-nginx-config\") pod \"whisker-66f545dd68-m4ktl\" (UID: \"4ea63ba6-372f-4274-9779-46e2f07d1b48\") " pod="calico-system/whisker-66f545dd68-m4ktl" Apr 16 01:09:39.622136 kubelet[2807]: I0416 01:09:39.621803 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea63ba6-372f-4274-9779-46e2f07d1b48-whisker-ca-bundle\") pod \"whisker-66f545dd68-m4ktl\" (UID: \"4ea63ba6-372f-4274-9779-46e2f07d1b48\") " pod="calico-system/whisker-66f545dd68-m4ktl" Apr 16 01:09:39.622136 kubelet[2807]: I0416 01:09:39.621823 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ea63ba6-372f-4274-9779-46e2f07d1b48-whisker-backend-key-pair\") pod \"whisker-66f545dd68-m4ktl\" (UID: \"4ea63ba6-372f-4274-9779-46e2f07d1b48\") " pod="calico-system/whisker-66f545dd68-m4ktl" Apr 16 01:09:39.622136 kubelet[2807]: I0416 01:09:39.621836 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gsc\" (UniqueName: \"kubernetes.io/projected/4ea63ba6-372f-4274-9779-46e2f07d1b48-kube-api-access-k7gsc\") pod \"whisker-66f545dd68-m4ktl\" (UID: \"4ea63ba6-372f-4274-9779-46e2f07d1b48\") " pod="calico-system/whisker-66f545dd68-m4ktl" Apr 16 01:09:39.941454 containerd[1593]: time="2026-04-16T01:09:39.940027265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f545dd68-m4ktl,Uid:4ea63ba6-372f-4274-9779-46e2f07d1b48,Namespace:calico-system,Attempt:0,}" Apr 16 01:09:40.605669 kubelet[2807]: I0416 01:09:40.604785 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5df53a-941c-45fc-a689-430c9f635b42" path="/var/lib/kubelet/pods/4f5df53a-941c-45fc-a689-430c9f635b42/volumes" Apr 16 
01:09:41.429792 systemd-networkd[1244]: cali22fc1073791: Link UP Apr 16 01:09:41.429975 systemd-networkd[1244]: cali22fc1073791: Gained carrier Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.139 [ERROR][4357] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.301 [INFO][4357] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66f545dd68--m4ktl-eth0 whisker-66f545dd68- calico-system 4ea63ba6-372f-4274-9779-46e2f07d1b48 1207 0 2026-04-16 01:09:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f545dd68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66f545dd68-m4ktl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22fc1073791 [] [] }} ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.302 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.585 [INFO][4389] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" HandleID="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Workload="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.668 [INFO][4389] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" HandleID="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Workload="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027a1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66f545dd68-m4ktl", "timestamp":"2026-04-16 01:09:40.585050368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00033e000)} Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.671 [INFO][4389] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.688 [INFO][4389] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.688 [INFO][4389] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.720 [INFO][4389] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:40.906 [INFO][4389] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.012 [INFO][4389] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.085 [INFO][4389] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.120 [INFO][4389] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.120 [INFO][4389] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.161 [INFO][4389] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20 Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.242 [INFO][4389] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.335 [INFO][4389] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.346 [INFO][4389] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" host="localhost" Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.346 [INFO][4389] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:09:41.720443 containerd[1593]: 2026-04-16 01:09:41.346 [INFO][4389] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" HandleID="k8s-pod-network.6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Workload="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.368 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f545dd68--m4ktl-eth0", GenerateName:"whisker-66f545dd68-", Namespace:"calico-system", SelfLink:"", UID:"4ea63ba6-372f-4274-9779-46e2f07d1b48", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f545dd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66f545dd68-m4ktl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22fc1073791", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.378 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.378 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22fc1073791 ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.436 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.437 [INFO][4357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f545dd68--m4ktl-eth0", GenerateName:"whisker-66f545dd68-", Namespace:"calico-system", SelfLink:"", UID:"4ea63ba6-372f-4274-9779-46e2f07d1b48", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 9, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f545dd68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20", Pod:"whisker-66f545dd68-m4ktl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22fc1073791", MAC:"fe:11:63:f2:13:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:41.747405 containerd[1593]: 2026-04-16 01:09:41.650 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20" Namespace="calico-system" Pod="whisker-66f545dd68-m4ktl" WorkloadEndpoint="localhost-k8s-whisker--66f545dd68--m4ktl-eth0" Apr 16 01:09:41.862817 containerd[1593]: time="2026-04-16T01:09:41.860944932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:41.862817 containerd[1593]: time="2026-04-16T01:09:41.861077581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:41.862817 containerd[1593]: time="2026-04-16T01:09:41.861092407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:41.862817 containerd[1593]: time="2026-04-16T01:09:41.861198166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:42.233857 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:42.514487 containerd[1593]: time="2026-04-16T01:09:42.511868305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f545dd68-m4ktl,Uid:4ea63ba6-372f-4274-9779-46e2f07d1b48,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20\"" Apr 16 01:09:42.535103 containerd[1593]: time="2026-04-16T01:09:42.529860471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 01:09:42.798760 systemd-networkd[1244]: cali22fc1073791: Gained IPv6LL Apr 16 01:09:42.980646 kernel: calico-node[4384]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 16 01:09:44.110007 systemd-networkd[1244]: vxlan.calico: Link UP Apr 16 01:09:44.118595 systemd-networkd[1244]: vxlan.calico: Gained carrier Apr 16 01:09:44.597639 containerd[1593]: time="2026-04-16T01:09:44.596381855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:44.607807 containerd[1593]: time="2026-04-16T01:09:44.599157918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 01:09:44.607807 containerd[1593]: time="2026-04-16T01:09:44.600841813Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:44.607807 containerd[1593]: time="2026-04-16T01:09:44.606961210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:44.608171 containerd[1593]: time="2026-04-16T01:09:44.607906022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.077970223s" Apr 16 01:09:44.608171 containerd[1593]: time="2026-04-16T01:09:44.607931752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 01:09:44.641843 containerd[1593]: time="2026-04-16T01:09:44.640806322Z" level=info msg="CreateContainer within sandbox \"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 01:09:44.720633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850234944.mount: Deactivated successfully. 
Apr 16 01:09:44.730663 containerd[1593]: time="2026-04-16T01:09:44.730425795Z" level=info msg="CreateContainer within sandbox \"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b82855d6207dc00b86523ab52a15e69ca8535ed85528a5cee3489bc80a678d89\"" Apr 16 01:09:44.732568 containerd[1593]: time="2026-04-16T01:09:44.732492089Z" level=info msg="StartContainer for \"b82855d6207dc00b86523ab52a15e69ca8535ed85528a5cee3489bc80a678d89\"" Apr 16 01:09:45.021106 containerd[1593]: time="2026-04-16T01:09:45.018642114Z" level=info msg="StartContainer for \"b82855d6207dc00b86523ab52a15e69ca8535ed85528a5cee3489bc80a678d89\" returns successfully" Apr 16 01:09:45.034018 containerd[1593]: time="2026-04-16T01:09:45.033923421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 01:09:45.296743 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Apr 16 01:09:47.546888 containerd[1593]: time="2026-04-16T01:09:47.546660587Z" level=info msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.024 [INFO][4723] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.043 [INFO][4723] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" iface="eth0" netns="/var/run/netns/cni-87a6fe00-9eaa-0c73-d4a4-fe9a81e75380" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.045 [INFO][4723] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" iface="eth0" netns="/var/run/netns/cni-87a6fe00-9eaa-0c73-d4a4-fe9a81e75380" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.046 [INFO][4723] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" iface="eth0" netns="/var/run/netns/cni-87a6fe00-9eaa-0c73-d4a4-fe9a81e75380" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.046 [INFO][4723] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.046 [INFO][4723] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.186 [INFO][4732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.186 [INFO][4732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.187 [INFO][4732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.268 [WARNING][4732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.269 [INFO][4732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.297 [INFO][4732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:48.318421 containerd[1593]: 2026-04-16 01:09:48.307 [INFO][4723] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:09:48.320893 containerd[1593]: time="2026-04-16T01:09:48.320419720Z" level=info msg="TearDown network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" successfully" Apr 16 01:09:48.320893 containerd[1593]: time="2026-04-16T01:09:48.320447253Z" level=info msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" returns successfully" Apr 16 01:09:48.320839 systemd[1]: run-netns-cni\x2d87a6fe00\x2d9eaa\x2d0c73\x2dd4a4\x2dfe9a81e75380.mount: Deactivated successfully. Apr 16 01:09:48.321739 kubelet[2807]: E0416 01:09:48.321068 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:48.323681 containerd[1593]: time="2026-04-16T01:09:48.322476934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28mcm,Uid:9e305132-072a-4841-9d59-183ab9643f4e,Namespace:kube-system,Attempt:1,}" Apr 16 01:09:48.418828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888180343.mount: Deactivated successfully. 
Apr 16 01:09:48.531988 containerd[1593]: time="2026-04-16T01:09:48.531171643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:48.532990 containerd[1593]: time="2026-04-16T01:09:48.532681374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 01:09:48.539803 containerd[1593]: time="2026-04-16T01:09:48.539147806Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:48.554013 containerd[1593]: time="2026-04-16T01:09:48.553067310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:09:48.559631 containerd[1593]: time="2026-04-16T01:09:48.559420337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.524780797s" Apr 16 01:09:48.559631 containerd[1593]: time="2026-04-16T01:09:48.559563546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 01:09:48.571644 containerd[1593]: time="2026-04-16T01:09:48.571026974Z" level=info msg="CreateContainer within sandbox \"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 01:09:48.641924 containerd[1593]: time="2026-04-16T01:09:48.641176616Z" level=info msg="CreateContainer within sandbox \"6d9341718c8cbdbcbc7b573ace0a22385a0180bde2c41335b0b7665e0d56ed20\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"790619d10767c08a600bdf09d93bfdd698c8746399cd176c3778bf8a6aa822f5\"" Apr 16 01:09:48.671146 containerd[1593]: time="2026-04-16T01:09:48.669687837Z" level=info msg="StartContainer for \"790619d10767c08a600bdf09d93bfdd698c8746399cd176c3778bf8a6aa822f5\"" Apr 16 01:09:49.166387 containerd[1593]: time="2026-04-16T01:09:49.165948304Z" level=info msg="StartContainer for \"790619d10767c08a600bdf09d93bfdd698c8746399cd176c3778bf8a6aa822f5\" returns successfully" Apr 16 01:09:49.683938 containerd[1593]: time="2026-04-16T01:09:49.683748714Z" level=info msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" Apr 16 01:09:49.709672 containerd[1593]: time="2026-04-16T01:09:49.705835000Z" level=info msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" Apr 16 01:09:49.818947 kubelet[2807]: I0416 01:09:49.817994 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66f545dd68-m4ktl" podStartSLOduration=4.776815212 podStartE2EDuration="10.817973265s" podCreationTimestamp="2026-04-16 01:09:39 +0000 UTC" firstStartedPulling="2026-04-16 01:09:42.522211774 +0000 UTC m=+131.193641716" lastFinishedPulling="2026-04-16 01:09:48.563369819 +0000 UTC m=+137.234799769" 
observedRunningTime="2026-04-16 01:09:49.813045792 +0000 UTC m=+138.484475748" watchObservedRunningTime="2026-04-16 01:09:49.817973265 +0000 UTC m=+138.489403245" Apr 16 01:09:50.525797 systemd-networkd[1244]: cali94d27047b8e: Link UP Apr 16 01:09:50.527764 systemd-networkd[1244]: cali94d27047b8e: Gained carrier Apr 16 01:09:50.967741 containerd[1593]: time="2026-04-16T01:09:50.963392895Z" level=info msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.599 [INFO][4739] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--28mcm-eth0 coredns-674b8bbfcf- kube-system 9e305132-072a-4841-9d59-183ab9643f4e 1234 0 2026-04-16 01:07:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-28mcm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali94d27047b8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.603 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.883 [INFO][4767] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" HandleID="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.929 [INFO][4767] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" HandleID="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-28mcm", "timestamp":"2026-04-16 01:09:48.883965765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000216dc0)} Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.930 [INFO][4767] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.930 [INFO][4767] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:48.930 [INFO][4767] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.059 [INFO][4767] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.169 [INFO][4767] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.616 [INFO][4767] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.775 [INFO][4767] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.926 [INFO][4767] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.928 [INFO][4767] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:49.946 [INFO][4767] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55 Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:50.070 [INFO][4767] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:50.172 [INFO][4767] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:50.189 [INFO][4767] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" host="localhost" Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:50.194 [INFO][4767] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:09:50.967741 containerd[1593]: 2026-04-16 01:09:50.204 [INFO][4767] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" HandleID="k8s-pod-network.5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:51.033781 containerd[1593]: 2026-04-16 01:09:50.392 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--28mcm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e305132-072a-4841-9d59-183ab9643f4e", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-28mcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d27047b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:51.033781 containerd[1593]: 2026-04-16 01:09:50.393 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:51.033781 containerd[1593]: 2026-04-16 01:09:50.393 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94d27047b8e ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:51.033781 containerd[1593]: 2026-04-16 01:09:50.534 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:51.033781 
containerd[1593]: 2026-04-16 01:09:50.618 [INFO][4739] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--28mcm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e305132-072a-4841-9d59-183ab9643f4e", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55", Pod:"coredns-674b8bbfcf-28mcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d27047b8e", MAC:"02:8d:61:2b:86:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:51.033781 containerd[1593]: 2026-04-16 01:09:50.910 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55" Namespace="kube-system" Pod="coredns-674b8bbfcf-28mcm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:09:51.033781 containerd[1593]: time="2026-04-16T01:09:50.972098091Z" level=info msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" Apr 16 01:09:51.109422 containerd[1593]: time="2026-04-16T01:09:51.102691202Z" level=info msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" Apr 16 01:09:51.600785 containerd[1593]: time="2026-04-16T01:09:51.600384915Z" level=info msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" Apr 16 01:09:51.950973 systemd-networkd[1244]: cali94d27047b8e: Gained IPv6LL Apr 16 01:09:52.008668 containerd[1593]: time="2026-04-16T01:09:52.007645013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:52.026370 containerd[1593]: time="2026-04-16T01:09:52.020822126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:52.026370 containerd[1593]: time="2026-04-16T01:09:52.020881169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:52.026370 containerd[1593]: time="2026-04-16T01:09:52.021038898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.534 [INFO][4829] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.534 [INFO][4829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" iface="eth0" netns="/var/run/netns/cni-83009f30-41a6-d80a-7c23-0bdd345eeaa0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.535 [INFO][4829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" iface="eth0" netns="/var/run/netns/cni-83009f30-41a6-d80a-7c23-0bdd345eeaa0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.624 [INFO][4829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" iface="eth0" netns="/var/run/netns/cni-83009f30-41a6-d80a-7c23-0bdd345eeaa0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.649 [INFO][4829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:50.649 [INFO][4829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.058 [INFO][4869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.064 [INFO][4869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.065 [INFO][4869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.116 [WARNING][4869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.144 [INFO][4869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.312 [INFO][4869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:52.447096 containerd[1593]: 2026-04-16 01:09:52.400 [INFO][4829] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:09:52.482727 containerd[1593]: time="2026-04-16T01:09:52.481205200Z" level=info msg="TearDown network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" successfully" Apr 16 01:09:52.482727 containerd[1593]: time="2026-04-16T01:09:52.481437528Z" level=info msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" returns successfully" Apr 16 01:09:52.501468 systemd[1]: run-netns-cni\x2d83009f30\x2d41a6\x2dd80a\x2d7c23\x2d0bdd345eeaa0.mount: Deactivated successfully. Apr 16 01:09:52.524382 containerd[1593]: time="2026-04-16T01:09:52.522019675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-vlgfc,Uid:0a26e0b5-baae-47de-8478-3a9191a4d5e8,Namespace:calico-system,Attempt:1,}" Apr 16 01:09:52.584069 systemd[1]: run-containerd-runc-k8s.io-5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55-runc.Km6mYb.mount: Deactivated successfully. Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.496 [INFO][4828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.499 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" iface="eth0" netns="/var/run/netns/cni-15c7490f-6289-c73e-7c16-993cca2af78b" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.499 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" iface="eth0" netns="/var/run/netns/cni-15c7490f-6289-c73e-7c16-993cca2af78b" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.519 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" iface="eth0" netns="/var/run/netns/cni-15c7490f-6289-c73e-7c16-993cca2af78b" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.519 [INFO][4828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:50.519 [INFO][4828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.174 [INFO][4856] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.175 [INFO][4856] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.326 [INFO][4856] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.482 [WARNING][4856] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.484 [INFO][4856] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.548 [INFO][4856] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:52.783859 containerd[1593]: 2026-04-16 01:09:52.621 [INFO][4828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:09:52.890138 systemd[1]: run-netns-cni\x2d15c7490f\x2d6289\x2dc73e\x2d7c16\x2d993cca2af78b.mount: Deactivated successfully. 
Apr 16 01:09:52.900831 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:53.263818 containerd[1593]: time="2026-04-16T01:09:53.263447051Z" level=info msg="TearDown network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" successfully" Apr 16 01:09:53.263818 containerd[1593]: time="2026-04-16T01:09:53.263703243Z" level=info msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" returns successfully" Apr 16 01:09:53.285941 containerd[1593]: time="2026-04-16T01:09:53.285465667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4f75b597-sfg9g,Uid:aebb0dae-448b-478a-a00a-811005b5982c,Namespace:calico-system,Attempt:1,}" Apr 16 01:09:53.593443 containerd[1593]: time="2026-04-16T01:09:53.593160001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-28mcm,Uid:9e305132-072a-4841-9d59-183ab9643f4e,Namespace:kube-system,Attempt:1,} returns sandbox id \"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55\"" Apr 16 01:09:53.659180 kubelet[2807]: E0416 01:09:53.657948 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:53.762975 containerd[1593]: time="2026-04-16T01:09:53.760968907Z" level=info msg="CreateContainer within sandbox \"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.628 [INFO][4897] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.629 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" iface="eth0" netns="/var/run/netns/cni-cdeb4ac2-56b9-9df5-401f-49dfbef2cea0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.633 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" iface="eth0" netns="/var/run/netns/cni-cdeb4ac2-56b9-9df5-401f-49dfbef2cea0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.667 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" iface="eth0" netns="/var/run/netns/cni-cdeb4ac2-56b9-9df5-401f-49dfbef2cea0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.667 [INFO][4897] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:52.667 [INFO][4897] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.548 [INFO][5012] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.637 [INFO][5012] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.638 [INFO][5012] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.735 [WARNING][5012] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.737 [INFO][5012] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.814 [INFO][5012] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:54.093379 containerd[1593]: 2026-04-16 01:09:53.944 [INFO][4897] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:09:54.100082 containerd[1593]: time="2026-04-16T01:09:54.099645581Z" level=info msg="TearDown network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" successfully" Apr 16 01:09:54.100082 containerd[1593]: time="2026-04-16T01:09:54.100024996Z" level=info msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" returns successfully" Apr 16 01:09:54.151604 kubelet[2807]: E0416 01:09:54.151395 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:54.151925 systemd[1]: run-netns-cni\x2dcdeb4ac2\x2d56b9\x2d9df5\x2d401f\x2d49dfbef2cea0.mount: Deactivated successfully. Apr 16 01:09:54.228656 containerd[1593]: time="2026-04-16T01:09:54.228150392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wt9ng,Uid:b148b156-4c3c-440d-9a9c-de6e9bd705a3,Namespace:kube-system,Attempt:1,}" Apr 16 01:09:54.474828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536424771.mount: Deactivated successfully. 
Apr 16 01:09:54.620197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590638679.mount: Deactivated successfully. Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.035 [INFO][4951] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.035 [INFO][4951] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" iface="eth0" netns="/var/run/netns/cni-69121a82-0528-787f-941f-179d85f22ef0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.035 [INFO][4951] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" iface="eth0" netns="/var/run/netns/cni-69121a82-0528-787f-941f-179d85f22ef0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.036 [INFO][4951] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" iface="eth0" netns="/var/run/netns/cni-69121a82-0528-787f-941f-179d85f22ef0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.036 [INFO][4951] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.036 [INFO][4951] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.757 [INFO][5020] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.780 [INFO][5020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:53.820 [INFO][5020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:54.182 [WARNING][5020] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:54.226 [INFO][5020] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:54.462 [INFO][5020] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:54.852718 containerd[1593]: 2026-04-16 01:09:54.750 [INFO][4951] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:09:54.854651 containerd[1593]: time="2026-04-16T01:09:54.854024751Z" level=info msg="TearDown network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" successfully" Apr 16 01:09:54.854651 containerd[1593]: time="2026-04-16T01:09:54.854111374Z" level=info msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" returns successfully" Apr 16 01:09:54.889740 containerd[1593]: time="2026-04-16T01:09:54.854849049Z" level=info msg="CreateContainer within sandbox \"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4600e6e3f0db23a7d864ef8bcecf391fcbd079dea4e544da1b5c3e48c06dff66\"" Apr 16 01:09:54.888926 systemd[1]: run-netns-cni\x2d69121a82\x2d0528\x2d787f\x2d941f\x2d179d85f22ef0.mount: Deactivated successfully. Apr 16 01:09:54.920083 containerd[1593]: time="2026-04-16T01:09:54.917145470Z" level=info msg="StartContainer for \"4600e6e3f0db23a7d864ef8bcecf391fcbd079dea4e544da1b5c3e48c06dff66\"" Apr 16 01:09:54.924642 containerd[1593]: time="2026-04-16T01:09:54.924152515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqrfc,Uid:73d74924-8e40-46ed-8ff0-31c0cdbb144c,Namespace:calico-system,Attempt:1,}" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.373 [INFO][4918] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.386 [INFO][4918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" iface="eth0" netns="/var/run/netns/cni-d8fc8ad7-2527-afdd-f80e-8fa16192dbb3" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.438 [INFO][4918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" iface="eth0" netns="/var/run/netns/cni-d8fc8ad7-2527-afdd-f80e-8fa16192dbb3" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.439 [INFO][4918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" iface="eth0" netns="/var/run/netns/cni-d8fc8ad7-2527-afdd-f80e-8fa16192dbb3" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.440 [INFO][4918] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:52.440 [INFO][4918] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:53.803 [INFO][4987] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:53.806 [INFO][4987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:54.463 [INFO][4987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:54.815 [WARNING][4987] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:54.816 [INFO][4987] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:54.918 [INFO][4987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:54.981828 containerd[1593]: 2026-04-16 01:09:54.967 [INFO][4918] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:09:54.989112 containerd[1593]: time="2026-04-16T01:09:54.988969567Z" level=info msg="TearDown network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" successfully" Apr 16 01:09:54.990155 containerd[1593]: time="2026-04-16T01:09:54.990139704Z" level=info msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" returns successfully" Apr 16 01:09:55.029020 containerd[1593]: time="2026-04-16T01:09:55.028949659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gmdnv,Uid:0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe,Namespace:calico-system,Attempt:1,}" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.450 [INFO][4924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.476 [INFO][4924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" iface="eth0" netns="/var/run/netns/cni-2b492f12-bd5b-13ac-d322-a6195a24b3c6" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.477 [INFO][4924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" iface="eth0" netns="/var/run/netns/cni-2b492f12-bd5b-13ac-d322-a6195a24b3c6" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.494 [INFO][4924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" iface="eth0" netns="/var/run/netns/cni-2b492f12-bd5b-13ac-d322-a6195a24b3c6" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.495 [INFO][4924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:53.495 [INFO][4924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:54.979 [INFO][5058] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:54.980 [INFO][5058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:54.980 [INFO][5058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:55.066 [WARNING][5058] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:55.067 [INFO][5058] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:55.079 [INFO][5058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:09:55.148389 containerd[1593]: 2026-04-16 01:09:55.108 [INFO][4924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:09:55.148389 containerd[1593]: time="2026-04-16T01:09:55.147765292Z" level=info msg="TearDown network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" successfully" Apr 16 01:09:55.148389 containerd[1593]: time="2026-04-16T01:09:55.147797025Z" level=info msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" returns successfully" Apr 16 01:09:55.182958 containerd[1593]: time="2026-04-16T01:09:55.182592230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-kk7cd,Uid:091fc483-3bbd-4649-ab92-475b732c9825,Namespace:calico-system,Attempt:1,}" Apr 16 01:09:55.220002 systemd[1]: run-netns-cni\x2d2b492f12\x2dbd5b\x2d13ac\x2dd322\x2da6195a24b3c6.mount: Deactivated successfully. Apr 16 01:09:55.220416 systemd[1]: run-netns-cni\x2dd8fc8ad7\x2d2527\x2dafdd\x2df80e\x2d8fa16192dbb3.mount: Deactivated successfully. 
Apr 16 01:09:56.003793 containerd[1593]: time="2026-04-16T01:09:56.002453830Z" level=info msg="StartContainer for \"4600e6e3f0db23a7d864ef8bcecf391fcbd079dea4e544da1b5c3e48c06dff66\" returns successfully" Apr 16 01:09:56.430906 kubelet[2807]: E0416 01:09:56.429851 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:57.114082 systemd-networkd[1244]: calibdf8a403f73: Link UP Apr 16 01:09:57.116566 systemd-networkd[1244]: calibdf8a403f73: Gained carrier Apr 16 01:09:57.164702 kubelet[2807]: I0416 01:09:57.163732 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-28mcm" podStartSLOduration=142.163712291 podStartE2EDuration="2m22.163712291s" podCreationTimestamp="2026-04-16 01:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:09:56.538204155 +0000 UTC m=+145.209634116" watchObservedRunningTime="2026-04-16 01:09:57.163712291 +0000 UTC m=+145.835142241" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:54.153 [INFO][5046] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0 calico-kube-controllers-c4f75b597- calico-system aebb0dae-448b-478a-a00a-811005b5982c 1252 0 2026-04-16 01:08:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c4f75b597 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c4f75b597-sfg9g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibdf8a403f73 [] [] }} ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:54.163 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:55.763 [INFO][5079] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" HandleID="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:55.992 [INFO][5079] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" HandleID="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407040), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c4f75b597-sfg9g", "timestamp":"2026-04-16 
01:09:55.762955633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027ac60)} Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:55.993 [INFO][5079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:55.993 [INFO][5079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:55.994 [INFO][5079] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.175 [INFO][5079] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.426 [INFO][5079] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.663 [INFO][5079] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.740 [INFO][5079] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.791 [INFO][5079] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.800 [INFO][5079] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.831 [INFO][5079] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.899 [INFO][5079] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.957 [INFO][5079] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.957 [INFO][5079] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" host="localhost" Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.957 [INFO][5079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
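One small cross-check on the kubelet pod_startup_latency_tracker record above (01:09:57.163): it reports the coredns-674b8bbfcf-28mcm startup span twice, as podStartSLOduration=142.163712291 seconds and as podStartE2EDuration=2m22.163712291s. A throwaway Go sketch, not kubelet code, confirming the two figures are the same interval:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Duration string as printed by kubelet in the log above.
	d, err := time.ParseDuration("2m22.163712291s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.9f\n", d.Seconds()) // 142.163712291, the podStartSLOduration value
}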
Apr 16 01:09:57.185719 containerd[1593]: 2026-04-16 01:09:56.957 [INFO][5079] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" HandleID="k8s-pod-network.bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.022 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0", GenerateName:"calico-kube-controllers-c4f75b597-", Namespace:"calico-system", SelfLink:"", UID:"aebb0dae-448b-478a-a00a-811005b5982c", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4f75b597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c4f75b597-sfg9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibdf8a403f73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.040 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.048 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdf8a403f73 ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.122 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.128 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0", GenerateName:"calico-kube-controllers-c4f75b597-", Namespace:"calico-system", SelfLink:"", UID:"aebb0dae-448b-478a-a00a-811005b5982c", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4f75b597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae", Pod:"calico-kube-controllers-c4f75b597-sfg9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibdf8a403f73", MAC:"72:be:13:40:29:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:57.191044 containerd[1593]: 2026-04-16 01:09:57.168 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae" Namespace="calico-system" Pod="calico-kube-controllers-c4f75b597-sfg9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:09:57.298901 containerd[1593]: time="2026-04-16T01:09:57.298612463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:57.298901 containerd[1593]: time="2026-04-16T01:09:57.298770848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:57.298901 containerd[1593]: time="2026-04-16T01:09:57.298785334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:57.298901 containerd[1593]: time="2026-04-16T01:09:57.298904010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:57.446105 kubelet[2807]: E0416 01:09:57.441662 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:57.476551 systemd-networkd[1244]: calia9bc4d89cc7: Link UP Apr 16 01:09:57.479645 systemd-networkd[1244]: calia9bc4d89cc7: Gained carrier Apr 16 01:09:57.705709 systemd[1]: run-containerd-runc-k8s.io-bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae-runc.Yf3T28.mount: Deactivated successfully. Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:54.740 [INFO][5028] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0 calico-apiserver-68bd47d56c- calico-system 0a26e0b5-baae-47de-8478-3a9191a4d5e8 1253 0 2026-04-16 01:08:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bd47d56c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68bd47d56c-vlgfc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia9bc4d89cc7 [] [] }} ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:54.817 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:55.871 [INFO][5091] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" HandleID="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:56.131 [INFO][5091] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" HandleID="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fc8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-68bd47d56c-vlgfc", "timestamp":"2026-04-16 01:09:55.871165359 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000726420)} Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:56.131 [INFO][5091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:56.965 [INFO][5091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:56.965 [INFO][5091] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.047 [INFO][5091] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.131 [INFO][5091] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.168 [INFO][5091] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.193 [INFO][5091] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.202 [INFO][5091] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.203 [INFO][5091] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.209 [INFO][5091] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.286 [INFO][5091] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.430 [INFO][5091] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.430 [INFO][5091] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" host="localhost" Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.430 [INFO][5091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
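The IPAM sequence that just completed (affinity for 192.168.88.128/26 confirmed, block loaded, 192.168.88.132/26 claimed) follows the same pattern as every other assignment in this log: all of the pod addresses come out of the one /26 block affine to this host. A short Go sketch, using the standard library rather than Calico's IPAM code, that simply verifies the addresses handed out so far fall inside that block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affinity block repeatedly referenced in the log above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses the IPAM plugin has assigned so far in this section.
	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr)) // all true
	}
}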
Apr 16 01:09:57.709026 containerd[1593]: 2026-04-16 01:09:57.430 [INFO][5091] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" HandleID="k8s-pod-network.bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.437 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"0a26e0b5-baae-47de-8478-3a9191a4d5e8", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68bd47d56c-vlgfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia9bc4d89cc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.437 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.437 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9bc4d89cc7 ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.482 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.490 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"0a26e0b5-baae-47de-8478-3a9191a4d5e8", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb", Pod:"calico-apiserver-68bd47d56c-vlgfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia9bc4d89cc7", MAC:"66:99:bc:4e:7f:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:57.709985 containerd[1593]: 2026-04-16 01:09:57.612 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-vlgfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:09:57.862664 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:57.987879 containerd[1593]: time="2026-04-16T01:09:57.961211927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:57.987879 containerd[1593]: time="2026-04-16T01:09:57.961974746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:57.987879 containerd[1593]: time="2026-04-16T01:09:57.961995993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:57.987879 containerd[1593]: time="2026-04-16T01:09:57.962912168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:58.209992 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:58.362947 systemd-networkd[1244]: calibdf8a403f73: Gained IPv6LL Apr 16 01:09:58.454454 systemd-networkd[1244]: cali40aa935c1ff: Link UP Apr 16 01:09:58.469189 kubelet[2807]: E0416 01:09:58.469163 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:58.469919 systemd-networkd[1244]: cali40aa935c1ff: Gained carrier Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:55.802 [INFO][5092] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0 coredns-674b8bbfcf- kube-system b148b156-4c3c-440d-9a9c-de6e9bd705a3 1265 0 2026-04-16 01:07:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wt9ng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40aa935c1ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:55.803 [INFO][5092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:56.716 [INFO][5190] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" HandleID="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:56.900 [INFO][5190] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" HandleID="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004067e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wt9ng", "timestamp":"2026-04-16 01:09:56.716597326 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005b2c60)} Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:56.900 [INFO][5190] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:57.433 [INFO][5190] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:57.433 [INFO][5190] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:57.477 [INFO][5190] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:57.796 [INFO][5190] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:57.850 [INFO][5190] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.017 [INFO][5190] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.141 [INFO][5190] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.143 [INFO][5190] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.181 [INFO][5190] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62 Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.332 [INFO][5190] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.375 [INFO][5190] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.375 [INFO][5190] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" host="localhost" Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.378 [INFO][5190] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
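A note on the host-side veth names the CNI plugin has picked so far (cali94d27047b8e, calibdf8a403f73, calia9bc4d89cc7, cali40aa935c1ff): Linux caps interface names at 15 characters (IFNAMSIZ minus the terminating NUL), and the "cali" prefix plus an 11-character suffix uses exactly that budget. A trivial Go check, purely illustrative:

package main

import "fmt"

func main() {
	// Host-side interface names created in this log; each is 15 characters,
	// the maximum Linux allows for an interface name.
	for _, ifname := range []string{
		"cali94d27047b8e", "calibdf8a403f73", "calia9bc4d89cc7", "cali40aa935c1ff",
	} {
		fmt.Println(ifname, len(ifname))
	}
}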
Apr 16 01:09:58.539915 containerd[1593]: 2026-04-16 01:09:58.382 [INFO][5190] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" HandleID="k8s-pod-network.a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.548082 containerd[1593]: 2026-04-16 01:09:58.410 [INFO][5092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b148b156-4c3c-440d-9a9c-de6e9bd705a3", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wt9ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40aa935c1ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:58.548082 containerd[1593]: 2026-04-16 01:09:58.421 [INFO][5092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.548082 containerd[1593]: 2026-04-16 01:09:58.422 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40aa935c1ff ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.548082 containerd[1593]: 2026-04-16 01:09:58.466 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.548082 
containerd[1593]: 2026-04-16 01:09:58.466 [INFO][5092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b148b156-4c3c-440d-9a9c-de6e9bd705a3", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62", Pod:"coredns-674b8bbfcf-wt9ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40aa935c1ff", MAC:"8a:4f:48:cd:38:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:58.548082 containerd[1593]: 2026-04-16 01:09:58.525 [INFO][5092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62" Namespace="kube-system" Pod="coredns-674b8bbfcf-wt9ng" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:09:58.543742 systemd-networkd[1244]: calia9bc4d89cc7: Gained IPv6LL Apr 16 01:09:58.574059 containerd[1593]: time="2026-04-16T01:09:58.570013458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c4f75b597-sfg9g,Uid:aebb0dae-448b-478a-a00a-811005b5982c,Namespace:calico-system,Attempt:1,} returns sandbox id \"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae\"" Apr 16 01:09:58.589677 containerd[1593]: time="2026-04-16T01:09:58.586456548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 01:09:58.612560 containerd[1593]: time="2026-04-16T01:09:58.612002875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:58.612560 containerd[1593]: time="2026-04-16T01:09:58.612049540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:58.612560 containerd[1593]: time="2026-04-16T01:09:58.612059716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:58.612560 containerd[1593]: time="2026-04-16T01:09:58.612125458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:58.629180 containerd[1593]: time="2026-04-16T01:09:58.628348479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-vlgfc,Uid:0a26e0b5-baae-47de-8478-3a9191a4d5e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb\"" Apr 16 01:09:58.691089 systemd-networkd[1244]: calia19ad0c984a: Link UP Apr 16 01:09:58.695001 systemd-networkd[1244]: calia19ad0c984a: Gained carrier Apr 16 01:09:58.834178 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:56.080 [INFO][5117] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gqrfc-eth0 csi-node-driver- calico-system 73d74924-8e40-46ed-8ff0-31c0cdbb144c 1268 0 2026-04-16 01:08:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gqrfc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia19ad0c984a [] [] }} ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:56.096 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:56.963 [INFO][5207] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" HandleID="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:57.105 [INFO][5207] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" HandleID="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gqrfc", "timestamp":"2026-04-16 01:09:56.963894962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004802c0)} Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:57.105 [INFO][5207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.375 [INFO][5207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.375 [INFO][5207] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.393 [INFO][5207] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.439 [INFO][5207] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.519 [INFO][5207] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.523 [INFO][5207] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.531 [INFO][5207] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.531 [INFO][5207] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.537 [INFO][5207] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225 Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.578 [INFO][5207] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.614 [INFO][5207] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.616 [INFO][5207] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" host="localhost" Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.617 [INFO][5207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:09:58.933013 containerd[1593]: 2026-04-16 01:09:58.617 [INFO][5207] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" HandleID="k8s-pod-network.75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.633 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gqrfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73d74924-8e40-46ed-8ff0-31c0cdbb144c", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gqrfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19ad0c984a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.657 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.657 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia19ad0c984a ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.680 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.733 [INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gqrfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73d74924-8e40-46ed-8ff0-31c0cdbb144c", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225", Pod:"csi-node-driver-gqrfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19ad0c984a", MAC:"c2:34:4d:7b:9a:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:58.934126 containerd[1593]: 2026-04-16 01:09:58.913 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225" Namespace="calico-system" Pod="csi-node-driver-gqrfc" WorkloadEndpoint="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:09:59.171432 containerd[1593]: time="2026-04-16T01:09:59.149669566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:59.171432 containerd[1593]: time="2026-04-16T01:09:59.159336068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:59.171432 containerd[1593]: time="2026-04-16T01:09:59.159349756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:59.171432 containerd[1593]: time="2026-04-16T01:09:59.159628636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:59.191653 containerd[1593]: time="2026-04-16T01:09:59.190588555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wt9ng,Uid:b148b156-4c3c-440d-9a9c-de6e9bd705a3,Namespace:kube-system,Attempt:1,} returns sandbox id \"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62\"" Apr 16 01:09:59.219614 kubelet[2807]: E0416 01:09:59.219417 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:59.290626 systemd-networkd[1244]: calib1f56873766: Link UP Apr 16 01:09:59.351654 systemd-networkd[1244]: calib1f56873766: Gained carrier Apr 16 01:09:59.383709 containerd[1593]: time="2026-04-16T01:09:59.368774297Z" level=info msg="CreateContainer within sandbox \"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:09:59.566832 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:56.835 [INFO][5131] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--gmdnv-eth0 goldmane-5b85766d88- calico-system 0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe 1263 0 2026-04-16 01:08:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-gmdnv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib1f56873766 [] [] }} ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:56.835 [INFO][5131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:57.134 [INFO][5229] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" HandleID="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:57.169 [INFO][5229] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" HandleID="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-gmdnv", "timestamp":"2026-04-16 01:09:57.134907024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000196000)} Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:57.169 [INFO][5229] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.617 [INFO][5229] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.617 [INFO][5229] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.623 [INFO][5229] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.633 [INFO][5229] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.729 [INFO][5229] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.817 [INFO][5229] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.932 [INFO][5229] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.932 [INFO][5229] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:58.992 [INFO][5229] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7 Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:59.097 [INFO][5229] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:59.219 [INFO][5229] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:59.220 [INFO][5229] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" host="localhost" Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:59.220 [INFO][5229] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
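Note how the four CNI invocations ([5190], [5207], [5229], [5221]) queue up behind the host-wide IPAM lock: in this excerpt the waits grow from roughly half a second to about two seconds. A rough sketch of measuring that wait per invocation from the timestamps embedded in the pass-through lines, under the same hypothetical node.log assumption:

#!/usr/bin/env python3
# Sketch only: estimate how long each CNI invocation waited on the host-wide
# Calico IPAM lock. Assumption (hypothetical): the journal text is in
# "node.log"; the [NNNN] field after [INFO] is used as the invocation key.
import re
from datetime import datetime

STAMP = r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)'
WAITING = re.compile(STAMP + r' \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: About to acquire host-wide IPAM lock')
ACQUIRED = re.compile(STAMP + r' \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: Acquired host-wide IPAM lock')

def ts(value: str) -> datetime:
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")

text = open("node.log", encoding="utf-8", errors="replace").read()
waiting_since = {pid: ts(stamp) for stamp, pid in WAITING.findall(text)}
for stamp, pid in ACQUIRED.findall(text):
    if pid in waiting_since:
        wait = (ts(stamp) - waiting_since[pid]).total_seconds()
        print(f"invocation [{pid}] waited {wait:.3f}s for the IPAM lock")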
Apr 16 01:09:59.609886 containerd[1593]: 2026-04-16 01:09:59.220 [INFO][5229] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" HandleID="k8s-pod-network.3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.251 [INFO][5131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gmdnv-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-gmdnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1f56873766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.251 [INFO][5131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.251 [INFO][5131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1f56873766 ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.356 [INFO][5131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.357 [INFO][5131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gmdnv-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7", Pod:"goldmane-5b85766d88-gmdnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1f56873766", MAC:"ae:94:f1:b4:1b:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:09:59.613212 containerd[1593]: 2026-04-16 01:09:59.524 [INFO][5131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7" Namespace="calico-system" Pod="goldmane-5b85766d88-gmdnv" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:09:59.651513 kubelet[2807]: E0416 01:09:59.651061 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:09:59.659092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988967960.mount: Deactivated successfully. Apr 16 01:09:59.682570 containerd[1593]: time="2026-04-16T01:09:59.679404377Z" level=info msg="CreateContainer within sandbox \"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fe39562659ba8030bfbc9b3670e79214c932bf3c049cc09927bc82318547cd6\"" Apr 16 01:09:59.705100 containerd[1593]: time="2026-04-16T01:09:59.702972136Z" level=info msg="StartContainer for \"4fe39562659ba8030bfbc9b3670e79214c932bf3c049cc09927bc82318547cd6\"" Apr 16 01:09:59.848416 containerd[1593]: time="2026-04-16T01:09:59.847747163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gqrfc,Uid:73d74924-8e40-46ed-8ff0-31c0cdbb144c,Namespace:calico-system,Attempt:1,} returns sandbox id \"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225\"" Apr 16 01:09:59.971399 containerd[1593]: time="2026-04-16T01:09:59.924004251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:09:59.974548 containerd[1593]: time="2026-04-16T01:09:59.972958682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:09:59.974548 containerd[1593]: time="2026-04-16T01:09:59.973444998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:59.974548 containerd[1593]: time="2026-04-16T01:09:59.973971966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:09:59.979074 systemd-networkd[1244]: cali8693cc81378: Link UP Apr 16 01:09:59.988913 systemd-networkd[1244]: cali8693cc81378: Gained carrier Apr 16 01:10:00.014804 systemd-networkd[1244]: cali40aa935c1ff: Gained IPv6LL Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:56.712 [INFO][5155] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0 calico-apiserver-68bd47d56c- calico-system 091fc483-3bbd-4649-ab92-475b732c9825 1270 0 2026-04-16 01:08:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bd47d56c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68bd47d56c-kk7cd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8693cc81378 [] [] }} ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:56.718 [INFO][5155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:57.136 [INFO][5221] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" HandleID="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:57.170 [INFO][5221] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" HandleID="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d2120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-68bd47d56c-kk7cd", "timestamp":"2026-04-16 01:09:57.136984051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00023c000)} Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:57.170 [INFO][5221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.220 [INFO][5221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.220 [INFO][5221] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.244 [INFO][5221] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.304 [INFO][5221] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.550 [INFO][5221] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.605 [INFO][5221] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.647 [INFO][5221] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.647 [INFO][5221] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.657 [INFO][5221] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4 Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.685 [INFO][5221] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.829 [INFO][5221] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.830 [INFO][5221] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" host="localhost" Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.830 [INFO][5221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 01:10:00.187695 containerd[1593]: 2026-04-16 01:09:59.830 [INFO][5221] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" HandleID="k8s-pod-network.91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:09:59.848 [INFO][5155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"091fc483-3bbd-4649-ab92-475b732c9825", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68bd47d56c-kk7cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8693cc81378", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:09:59.848 [INFO][5155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:09:59.848 [INFO][5155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8693cc81378 ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:10:00.003 [INFO][5155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:10:00.010 [INFO][5155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"091fc483-3bbd-4649-ab92-475b732c9825", ResourceVersion:"1270", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4", Pod:"calico-apiserver-68bd47d56c-kk7cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8693cc81378", MAC:"46:88:07:8e:94:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:10:00.190817 containerd[1593]: 2026-04-16 01:10:00.172 [INFO][5155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4" Namespace="calico-system" Pod="calico-apiserver-68bd47d56c-kk7cd" WorkloadEndpoint="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:10:00.400865 systemd-networkd[1244]: calia19ad0c984a: Gained IPv6LL Apr 16 01:10:00.407884 systemd-networkd[1244]: calib1f56873766: Gained IPv6LL Apr 16 01:10:00.436768 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:10:00.488061 containerd[1593]: time="2026-04-16T01:10:00.478161394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:10:00.488061 containerd[1593]: time="2026-04-16T01:10:00.478550391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:10:00.488061 containerd[1593]: time="2026-04-16T01:10:00.478560746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:10:00.488061 containerd[1593]: time="2026-04-16T01:10:00.478746822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:10:00.511927 containerd[1593]: time="2026-04-16T01:10:00.505043594Z" level=info msg="StartContainer for \"4fe39562659ba8030bfbc9b3670e79214c932bf3c049cc09927bc82318547cd6\" returns successfully" Apr 16 01:10:00.762152 containerd[1593]: time="2026-04-16T01:10:00.759189150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gmdnv,Uid:0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe,Namespace:calico-system,Attempt:1,} returns sandbox id \"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7\"" Apr 16 01:10:00.774677 kubelet[2807]: E0416 01:10:00.774576 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:00.829905 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:10:00.890199 kubelet[2807]: I0416 01:10:00.888364 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wt9ng" podStartSLOduration=145.888346264 podStartE2EDuration="2m25.888346264s" podCreationTimestamp="2026-04-16 01:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:10:00.885734051 +0000 UTC m=+149.557163998" watchObservedRunningTime="2026-04-16 01:10:00.888346264 +0000 UTC m=+149.559776217" Apr 16 01:10:01.266876 containerd[1593]: time="2026-04-16T01:10:01.266646694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bd47d56c-kk7cd,Uid:091fc483-3bbd-4649-ab92-475b732c9825,Namespace:calico-system,Attempt:1,} returns sandbox id \"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4\"" Apr 16 01:10:01.365435 systemd-networkd[1244]: cali8693cc81378: Gained IPv6LL Apr 16 01:10:01.860907 kubelet[2807]: E0416 01:10:01.860676 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:02.891709 kubelet[2807]: E0416 01:10:02.891198 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:03.643531 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:48246.service - OpenSSH per-connection server daemon (10.0.0.1:48246). Apr 16 01:10:04.129425 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 48246 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:04.132360 sshd[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:04.153785 systemd-logind[1572]: New session 10 of user core. Apr 16 01:10:04.168375 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 01:10:04.754905 sshd[5644]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:04.804098 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:48246.service: Deactivated successfully. Apr 16 01:10:04.821011 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 01:10:04.824017 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. Apr 16 01:10:04.829940 systemd-logind[1572]: Removed session 10. 
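The kubelet's pod_startup_latency_tracker entries, such as the coredns record above with podStartE2EDuration="2m25.888346264s", are the most direct place to read end-to-end pod startup times out of this log. A small sketch that collects them, again assuming the hypothetical node.log dump:

#!/usr/bin/env python3
# Sketch only: collect the kubelet "Observed pod startup duration" records
# from this journal. Assumption (hypothetical): the text is in "node.log";
# only the pod name and the Go-style podStartE2EDuration value are read.
import re

RECORD = re.compile(r'"Observed pod startup duration" pod="([^"]+)".*?podStartE2EDuration="([^"]+)"')
GO_DURATION = re.compile(r'(?:(\d+)h)?(?:(\d+)m)?([\d.]+)s')

def seconds(duration: str) -> float:
    hours, minutes, secs = GO_DURATION.fullmatch(duration).groups()
    return int(hours or 0) * 3600 + int(minutes or 0) * 60 + float(secs)

text = open("node.log", encoding="utf-8", errors="replace").read()
for pod, duration in RECORD.findall(text):
    print(f"{pod:55s} end-to-end startup {seconds(duration):8.3f}s ({duration})")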
Apr 16 01:10:07.444700 containerd[1593]: time="2026-04-16T01:10:07.444069589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:07.451575 containerd[1593]: time="2026-04-16T01:10:07.451409644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 01:10:07.463360 containerd[1593]: time="2026-04-16T01:10:07.462536093Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:07.475966 containerd[1593]: time="2026-04-16T01:10:07.475727275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:07.479122 containerd[1593]: time="2026-04-16T01:10:07.478933646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 8.892373947s" Apr 16 01:10:07.479122 containerd[1593]: time="2026-04-16T01:10:07.479051477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 01:10:07.484978 containerd[1593]: time="2026-04-16T01:10:07.484591340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:10:07.547148 containerd[1593]: time="2026-04-16T01:10:07.547006608Z" level=info msg="CreateContainer within sandbox \"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 01:10:07.602912 containerd[1593]: time="2026-04-16T01:10:07.602527304Z" level=info msg="CreateContainer within sandbox \"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c46c228a4f5328eb8176ee22f41bed3ac8dc927d69a1e1d88e2fc958de0937e3\"" Apr 16 01:10:07.606422 containerd[1593]: time="2026-04-16T01:10:07.606344499Z" level=info msg="StartContainer for \"c46c228a4f5328eb8176ee22f41bed3ac8dc927d69a1e1d88e2fc958de0937e3\"" Apr 16 01:10:07.992787 containerd[1593]: time="2026-04-16T01:10:07.992337519Z" level=info msg="StartContainer for \"c46c228a4f5328eb8176ee22f41bed3ac8dc927d69a1e1d88e2fc958de0937e3\" returns successfully" Apr 16 01:10:08.158629 kubelet[2807]: I0416 01:10:08.154576 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c4f75b597-sfg9g" podStartSLOduration=80.256397368 podStartE2EDuration="1m29.154529774s" podCreationTimestamp="2026-04-16 01:08:39 +0000 UTC" firstStartedPulling="2026-04-16 01:09:58.58557494 +0000 UTC m=+147.257004883" lastFinishedPulling="2026-04-16 01:10:07.483707347 +0000 UTC m=+156.155137289" observedRunningTime="2026-04-16 01:10:08.152844428 +0000 UTC m=+156.824274382" watchObservedRunningTime="2026-04-16 01:10:08.154529774 +0000 UTC m=+156.825959716" Apr 16 01:10:09.633817 kubelet[2807]: 
E0416 01:10:09.629428 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:09.643675 kubelet[2807]: E0416 01:10:09.643654 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:10.049014 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:55252.service - OpenSSH per-connection server daemon (10.0.0.1:55252). Apr 16 01:10:10.436927 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 55252 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:10.444940 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:10.476969 systemd-logind[1572]: New session 11 of user core. Apr 16 01:10:10.488120 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 01:10:12.839171 sshd[5797]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:12.976402 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:12.884790 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:12.884958 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:12.943631 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:55252.service: Deactivated successfully. Apr 16 01:10:12.977935 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 01:10:12.998629 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Apr 16 01:10:13.089845 systemd-logind[1572]: Removed session 11. Apr 16 01:10:14.935038 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:14.935904 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:14.935045 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:17.565436 kubelet[2807]: E0416 01:10:17.564951 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:17.910856 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:55264.service - OpenSSH per-connection server daemon (10.0.0.1:55264). Apr 16 01:10:18.236939 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:18.237989 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:18.249716 systemd-logind[1572]: New session 12 of user core. Apr 16 01:10:18.308851 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 16 01:10:18.667936 kubelet[2807]: E0416 01:10:18.667571 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:19.230590 containerd[1593]: time="2026-04-16T01:10:19.225132928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:19.230590 containerd[1593]: time="2026-04-16T01:10:19.228828939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 01:10:19.275674 containerd[1593]: time="2026-04-16T01:10:19.272809610Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:19.398818 containerd[1593]: time="2026-04-16T01:10:19.394165442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:19.414977 containerd[1593]: time="2026-04-16T01:10:19.414917495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 11.930203164s" Apr 16 01:10:19.415964 containerd[1593]: time="2026-04-16T01:10:19.415135496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 01:10:19.431906 containerd[1593]: time="2026-04-16T01:10:19.431614960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 01:10:19.493414 containerd[1593]: time="2026-04-16T01:10:19.492187784Z" level=info msg="CreateContainer within sandbox \"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:10:19.594428 containerd[1593]: time="2026-04-16T01:10:19.593970358Z" level=info msg="CreateContainer within sandbox \"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc956a295cfc616cf2c205ca6f38483432a7a3ea9f7fbf7dd8d6637dba924b7a\"" Apr 16 01:10:19.605041 containerd[1593]: time="2026-04-16T01:10:19.599177622Z" level=info msg="StartContainer for \"cc956a295cfc616cf2c205ca6f38483432a7a3ea9f7fbf7dd8d6637dba924b7a\"" Apr 16 01:10:19.648594 sshd[5859]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:19.671801 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:55264.service: Deactivated successfully. Apr 16 01:10:19.683909 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 01:10:19.684626 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Apr 16 01:10:19.689701 systemd-logind[1572]: Removed session 12. 
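The image pulls recorded in this stretch account for a noticeable share of the remaining startup time (about 8.9s for the kube-controllers image above and 11.9s for the apiserver image). A last sketch, with the same hypothetical node.log input, tallying the containerd "Pulled image ... in Ns" events:

#!/usr/bin/env python3
# Sketch only: tally the containerd image-pull durations logged in this journal.
# Assumption (hypothetical): the text is in "node.log". The journal renders
# quotes inside msg="..." as \", so the pattern allows a backslash before each
# quote around the image name.
import re

PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".* in ([\d.]+)s')

text = open("node.log", encoding="utf-8", errors="replace").read()
for image, secs in PULLED.findall(text):
    print(f"{image:55s} pulled in {float(secs):7.3f}s")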
Apr 16 01:10:20.433173 containerd[1593]: time="2026-04-16T01:10:20.432767468Z" level=info msg="StartContainer for \"cc956a295cfc616cf2c205ca6f38483432a7a3ea9f7fbf7dd8d6637dba924b7a\" returns successfully" Apr 16 01:10:20.893847 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:20.880658 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:20.880682 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:21.217163 kubelet[2807]: I0416 01:10:21.216132 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-68bd47d56c-vlgfc" podStartSLOduration=93.453233849 podStartE2EDuration="1m54.216119149s" podCreationTimestamp="2026-04-16 01:08:27 +0000 UTC" firstStartedPulling="2026-04-16 01:09:58.665546566 +0000 UTC m=+147.336976508" lastFinishedPulling="2026-04-16 01:10:19.428431863 +0000 UTC m=+168.099861808" observedRunningTime="2026-04-16 01:10:21.215170924 +0000 UTC m=+169.886600867" watchObservedRunningTime="2026-04-16 01:10:21.216119149 +0000 UTC m=+169.887549099" Apr 16 01:10:23.120877 containerd[1593]: time="2026-04-16T01:10:23.118383545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:23.224928 containerd[1593]: time="2026-04-16T01:10:23.132379484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 01:10:23.224928 containerd[1593]: time="2026-04-16T01:10:23.195772848Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:23.224928 containerd[1593]: time="2026-04-16T01:10:23.219598636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:10:23.224928 containerd[1593]: time="2026-04-16T01:10:23.221925202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.79027254s" Apr 16 01:10:23.224928 containerd[1593]: time="2026-04-16T01:10:23.221951806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 01:10:23.270597 containerd[1593]: time="2026-04-16T01:10:23.268453865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 01:10:23.303007 containerd[1593]: time="2026-04-16T01:10:23.293828823Z" level=info msg="CreateContainer within sandbox \"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 01:10:23.427098 containerd[1593]: time="2026-04-16T01:10:23.419420664Z" level=info msg="CreateContainer within sandbox \"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1c8e6a37eeb00c867dff709c38a6288f40deddd45a9b0ad8374b3b548f9e9b05\"" Apr 16 01:10:23.436211 containerd[1593]: time="2026-04-16T01:10:23.435436578Z" level=info msg="StartContainer 
for \"1c8e6a37eeb00c867dff709c38a6288f40deddd45a9b0ad8374b3b548f9e9b05\"" Apr 16 01:10:24.157661 containerd[1593]: time="2026-04-16T01:10:24.156615937Z" level=info msg="StartContainer for \"1c8e6a37eeb00c867dff709c38a6288f40deddd45a9b0ad8374b3b548f9e9b05\" returns successfully" Apr 16 01:10:24.748123 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:58762.service - OpenSSH per-connection server daemon (10.0.0.1:58762). Apr 16 01:10:25.069857 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 58762 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:25.127033 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:25.167621 systemd-logind[1572]: New session 13 of user core. Apr 16 01:10:25.178921 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 01:10:26.596016 sshd[5966]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:26.644909 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:58762.service: Deactivated successfully. Apr 16 01:10:26.739882 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 01:10:26.769726 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Apr 16 01:10:26.814607 systemd-logind[1572]: Removed session 13. Apr 16 01:10:29.573698 kubelet[2807]: E0416 01:10:29.572016 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:10:31.670406 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:43566.service - OpenSSH per-connection server daemon (10.0.0.1:43566). Apr 16 01:10:32.087035 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 43566 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:32.174163 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:32.229059 systemd-logind[1572]: New session 14 of user core. Apr 16 01:10:32.234968 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 01:10:33.597084 containerd[1593]: time="2026-04-16T01:10:33.594814581Z" level=info msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" Apr 16 01:10:34.932768 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:34.931798 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:34.931956 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:36.325847 sshd[6018]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:36.422056 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:43566.service: Deactivated successfully. Apr 16 01:10:36.467433 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 01:10:36.488119 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Apr 16 01:10:36.663444 systemd-logind[1572]: Removed session 14. Apr 16 01:10:36.976698 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:36.969949 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:36.973980 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:40.970703 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:40.920469 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:40.939858 systemd-resolved[1467]: Flushed all caches. 
Apr 16 01:10:41.772488 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:47082.service - OpenSSH per-connection server daemon (10.0.0.1:47082). Apr 16 01:10:43.050116 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:43.051193 kubelet[2807]: E0416 01:10:43.047763 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.079s" Apr 16 01:10:43.051995 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:43.052003 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:44.063045 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 47082 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:44.046876 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:44.152734 kubelet[2807]: E0416 01:10:44.082787 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.034s" Apr 16 01:10:44.583138 systemd-logind[1572]: New session 15 of user core. Apr 16 01:10:44.626146 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 01:10:46.958800 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:46.932150 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:46.932164 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:47.338023 sshd[6076]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:47.677212 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:47082.service: Deactivated successfully. Apr 16 01:10:47.850450 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 01:10:47.870996 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Apr 16 01:10:48.187201 systemd-logind[1572]: Removed session 15. Apr 16 01:10:49.218763 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:10:49.022055 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:10:49.022856 systemd-resolved[1467]: Flushed all caches. Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:38.393 [WARNING][6046] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b148b156-4c3c-440d-9a9c-de6e9bd705a3", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62", Pod:"coredns-674b8bbfcf-wt9ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40aa935c1ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:38.418 [INFO][6046] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:38.419 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" iface="eth0" netns="" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:38.419 [INFO][6046] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:38.419 [INFO][6046] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:47.933 [INFO][6061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:47.987 [INFO][6061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:48.176 [INFO][6061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:50.654 [WARNING][6061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:50.681 [INFO][6061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:51.073 [INFO][6061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:10:51.428549 containerd[1593]: 2026-04-16 01:10:51.171 [INFO][6046] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:10:51.461886 containerd[1593]: time="2026-04-16T01:10:51.460675605Z" level=info msg="TearDown network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" successfully" Apr 16 01:10:51.461886 containerd[1593]: time="2026-04-16T01:10:51.460765237Z" level=info msg="StopPodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" returns successfully" Apr 16 01:10:51.461140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294361776.mount: Deactivated successfully. Apr 16 01:10:52.380557 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:43360.service - OpenSSH per-connection server daemon (10.0.0.1:43360). Apr 16 01:10:54.055885 containerd[1593]: time="2026-04-16T01:10:54.055694567Z" level=info msg="RemovePodSandbox for \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" Apr 16 01:10:54.211390 sshd[6139]: Accepted publickey for core from 10.0.0.1 port 43360 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:10:54.271764 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:10:54.295885 containerd[1593]: time="2026-04-16T01:10:54.273139780Z" level=info msg="Forcibly stopping sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\"" Apr 16 01:10:55.350066 systemd-logind[1572]: New session 16 of user core. Apr 16 01:10:55.530103 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 01:10:57.065965 kubelet[2807]: E0416 01:10:57.062208 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.403s" Apr 16 01:10:58.692125 sshd[6139]: pam_unix(sshd:session): session closed for user core Apr 16 01:10:58.855148 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:43370.service - OpenSSH per-connection server daemon (10.0.0.1:43370). Apr 16 01:10:58.867948 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:43360.service: Deactivated successfully. Apr 16 01:10:58.960502 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 01:10:58.998133 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Apr 16 01:10:59.786393 systemd-logind[1572]: Removed session 16. 
Apr 16 01:11:02.178571 sshd[6172]: Accepted publickey for core from 10.0.0.1 port 43370 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:02.576755 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:02.711725 systemd-logind[1572]: New session 17 of user core. Apr 16 01:11:02.739823 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 01:11:05.791524 kubelet[2807]: E0416 01:11:05.791158 2807 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.227s" Apr 16 01:11:06.613847 kubelet[2807]: E0416 01:11:06.605081 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:06.973085 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:11:06.899990 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:11:06.899997 systemd-resolved[1467]: Flushed all caches. Apr 16 01:11:08.555842 sshd[6172]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:08.790583 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:51720.service - OpenSSH per-connection server daemon (10.0.0.1:51720). Apr 16 01:11:08.828545 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Apr 16 01:11:08.863521 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:43370.service: Deactivated successfully. Apr 16 01:11:08.910020 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 01:11:08.998845 systemd-logind[1572]: Removed session 17. Apr 16 01:11:09.727846 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 51720 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:09.733048 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:09.924504 systemd-logind[1572]: New session 18 of user core. Apr 16 01:11:09.930956 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:01.649 [WARNING][6164] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b148b156-4c3c-440d-9a9c-de6e9bd705a3", ResourceVersion:"1332", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8c47182317a55eb587d3ae801e5fdc1d8a4293163c58eec2491e768ff44fc62", Pod:"coredns-674b8bbfcf-wt9ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40aa935c1ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:01.690 [INFO][6164] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:01.690 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" iface="eth0" netns="" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:01.690 [INFO][6164] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:01.690 [INFO][6164] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:09.220 [INFO][6181] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:09.221 [INFO][6181] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:09.221 [INFO][6181] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:10.012 [WARNING][6181] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:10.014 [INFO][6181] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" HandleID="k8s-pod-network.c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Workload="localhost-k8s-coredns--674b8bbfcf--wt9ng-eth0" Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:10.222 [INFO][6181] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:10.749450 containerd[1593]: 2026-04-16 01:11:10.559 [INFO][6164] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8" Apr 16 01:11:10.767764 containerd[1593]: time="2026-04-16T01:11:10.755174216Z" level=info msg="TearDown network for sandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" successfully" Apr 16 01:11:11.239462 containerd[1593]: time="2026-04-16T01:11:11.230945434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:11.239462 containerd[1593]: time="2026-04-16T01:11:11.231710418Z" level=info msg="RemovePodSandbox \"c040b0a1c2d725193c57fb5cde1a286d2d60bd0874fb62f19243459d5d9d7ff8\" returns successfully" Apr 16 01:11:11.262754 containerd[1593]: time="2026-04-16T01:11:11.252621863Z" level=info msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" Apr 16 01:11:12.516963 sshd[6220]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:12.589626 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:51720.service: Deactivated successfully. Apr 16 01:11:12.608897 kubelet[2807]: E0416 01:11:12.606003 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:12.610181 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Apr 16 01:11:12.615361 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 01:11:12.757800 systemd-logind[1572]: Removed session 18. Apr 16 01:11:12.923644 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:11:12.918875 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:11:12.918926 systemd-resolved[1467]: Flushed all caches. Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:15.091 [WARNING][6275] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0", GenerateName:"calico-kube-controllers-c4f75b597-", Namespace:"calico-system", SelfLink:"", UID:"aebb0dae-448b-478a-a00a-811005b5982c", ResourceVersion:"1411", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4f75b597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae", Pod:"calico-kube-controllers-c4f75b597-sfg9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibdf8a403f73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:15.092 [INFO][6275] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:15.093 [INFO][6275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" iface="eth0" netns="" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:15.094 [INFO][6275] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:15.094 [INFO][6275] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.308 [INFO][6297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.310 [INFO][6297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.310 [INFO][6297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.533 [WARNING][6297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.592 [INFO][6297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.799 [INFO][6297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:16.828333 containerd[1593]: 2026-04-16 01:11:16.819 [INFO][6275] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:16.828333 containerd[1593]: time="2026-04-16T01:11:16.828122159Z" level=info msg="TearDown network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" successfully" Apr 16 01:11:16.828333 containerd[1593]: time="2026-04-16T01:11:16.828510741Z" level=info msg="StopPodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" returns successfully" Apr 16 01:11:16.872856 containerd[1593]: time="2026-04-16T01:11:16.872759836Z" level=info msg="RemovePodSandbox for \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" Apr 16 01:11:16.872856 containerd[1593]: time="2026-04-16T01:11:16.872853331Z" level=info msg="Forcibly stopping sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\"" Apr 16 01:11:17.530176 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:41220.service - OpenSSH per-connection server daemon (10.0.0.1:41220). Apr 16 01:11:17.792910 sshd[6324]: Accepted publickey for core from 10.0.0.1 port 41220 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:17.799060 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:17.822006 systemd-logind[1572]: New session 19 of user core. Apr 16 01:11:17.828575 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.769 [WARNING][6317] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0", GenerateName:"calico-kube-controllers-c4f75b597-", Namespace:"calico-system", SelfLink:"", UID:"aebb0dae-448b-478a-a00a-811005b5982c", ResourceVersion:"1411", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c4f75b597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf9013da637a1dd888e09480c1f4d563da59a1e90f212a745d02e6c66a4c81ae", Pod:"calico-kube-controllers-c4f75b597-sfg9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibdf8a403f73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.770 [INFO][6317] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.770 [INFO][6317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" iface="eth0" netns="" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.770 [INFO][6317] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.770 [INFO][6317] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.945 [INFO][6328] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.945 [INFO][6328] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.945 [INFO][6328] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.989 [WARNING][6328] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:17.989 [INFO][6328] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" HandleID="k8s-pod-network.af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Workload="localhost-k8s-calico--kube--controllers--c4f75b597--sfg9g-eth0" Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:18.169 [INFO][6328] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:18.203680 containerd[1593]: 2026-04-16 01:11:18.192 [INFO][6317] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0" Apr 16 01:11:18.235421 containerd[1593]: time="2026-04-16T01:11:18.204144738Z" level=info msg="TearDown network for sandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" successfully" Apr 16 01:11:18.304574 containerd[1593]: time="2026-04-16T01:11:18.304491604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:18.306138 containerd[1593]: time="2026-04-16T01:11:18.306115634Z" level=info msg="RemovePodSandbox \"af30493e36c4f805621851319eb9774a0a14c84a53202e572936317e57afa6a0\" returns successfully" Apr 16 01:11:18.313420 containerd[1593]: time="2026-04-16T01:11:18.311997181Z" level=info msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" Apr 16 01:11:18.922876 sshd[6324]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:18.945581 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Apr 16 01:11:18.947839 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:41220.service: Deactivated successfully. Apr 16 01:11:18.959664 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 01:11:18.961436 systemd-logind[1572]: Removed session 19. 
Apr 16 01:11:19.261051 containerd[1593]: time="2026-04-16T01:11:19.260355296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:19.268782 containerd[1593]: time="2026-04-16T01:11:19.268643367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 01:11:19.270587 containerd[1593]: time="2026-04-16T01:11:19.270548559Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:19.294586 containerd[1593]: time="2026-04-16T01:11:19.292001146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:19.333841 containerd[1593]: time="2026-04-16T01:11:19.332573302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 56.063820429s" Apr 16 01:11:19.333841 containerd[1593]: time="2026-04-16T01:11:19.332846083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 01:11:19.482366 containerd[1593]: time="2026-04-16T01:11:19.466561823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:11:19.491867 containerd[1593]: time="2026-04-16T01:11:19.491690532Z" level=info msg="CreateContainer within sandbox \"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 01:11:19.788158 kubelet[2807]: E0416 01:11:19.786516 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:20.668434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522067857.mount: Deactivated successfully. Apr 16 01:11:20.839856 containerd[1593]: time="2026-04-16T01:11:20.836558268Z" level=info msg="CreateContainer within sandbox \"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b8c5166a980177e23e9b2cb2e810fc3dd096d35dfd68c3bd6bf964b5381c6a5\"" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:18.809 [WARNING][6355] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"0a26e0b5-baae-47de-8478-3a9191a4d5e8", ResourceVersion:"1459", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb", Pod:"calico-apiserver-68bd47d56c-vlgfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia9bc4d89cc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:18.830 [INFO][6355] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:18.838 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" iface="eth0" netns="" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:18.838 [INFO][6355] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:18.838 [INFO][6355] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:19.491 [INFO][6364] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:19.580 [INFO][6364] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:19.780 [INFO][6364] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:20.225 [WARNING][6364] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:20.225 [INFO][6364] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:20.515 [INFO][6364] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:20.994923 containerd[1593]: 2026-04-16 01:11:20.784 [INFO][6355] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:20.994923 containerd[1593]: time="2026-04-16T01:11:20.981527488Z" level=info msg="TearDown network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" successfully" Apr 16 01:11:20.994923 containerd[1593]: time="2026-04-16T01:11:20.981569257Z" level=info msg="StopPodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" returns successfully" Apr 16 01:11:21.002405 containerd[1593]: time="2026-04-16T01:11:20.995352825Z" level=info msg="RemovePodSandbox for \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" Apr 16 01:11:21.002405 containerd[1593]: time="2026-04-16T01:11:20.995475317Z" level=info msg="Forcibly stopping sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\"" Apr 16 01:11:21.002405 containerd[1593]: time="2026-04-16T01:11:20.995544395Z" level=info msg="StartContainer for \"1b8c5166a980177e23e9b2cb2e810fc3dd096d35dfd68c3bd6bf964b5381c6a5\"" Apr 16 01:11:21.568211 containerd[1593]: time="2026-04-16T01:11:21.568049614Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:21.600064 containerd[1593]: time="2026-04-16T01:11:21.573192968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 01:11:21.671883 containerd[1593]: time="2026-04-16T01:11:21.671687755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.190308176s" Apr 16 01:11:21.675828 containerd[1593]: time="2026-04-16T01:11:21.674882691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 01:11:21.927974 containerd[1593]: time="2026-04-16T01:11:21.924655146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 01:11:22.020708 containerd[1593]: time="2026-04-16T01:11:22.020628763Z" level=info msg="CreateContainer within sandbox \"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:11:22.220925 containerd[1593]: 
time="2026-04-16T01:11:22.218918634Z" level=info msg="CreateContainer within sandbox \"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"209db5cfa8ff6ba1a4465856d523a0ddb135ec4d698daf703f493c86c0567189\"" Apr 16 01:11:22.350617 containerd[1593]: time="2026-04-16T01:11:22.349921561Z" level=info msg="StartContainer for \"209db5cfa8ff6ba1a4465856d523a0ddb135ec4d698daf703f493c86c0567189\"" Apr 16 01:11:22.560693 kubelet[2807]: E0416 01:11:22.558507 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:23.027461 containerd[1593]: time="2026-04-16T01:11:23.026451184Z" level=info msg="StartContainer for \"1b8c5166a980177e23e9b2cb2e810fc3dd096d35dfd68c3bd6bf964b5381c6a5\" returns successfully" Apr 16 01:11:23.458147 containerd[1593]: time="2026-04-16T01:11:23.457349188Z" level=info msg="StartContainer for \"209db5cfa8ff6ba1a4465856d523a0ddb135ec4d698daf703f493c86c0567189\" returns successfully" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:22.830 [WARNING][6399] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"0a26e0b5-baae-47de-8478-3a9191a4d5e8", ResourceVersion:"1459", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb2078ba33e9fae6cb41bb3cd3b93153454136fc484449cda60629ef0543d9fb", Pod:"calico-apiserver-68bd47d56c-vlgfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia9bc4d89cc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:22.860 [INFO][6399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:22.860 [INFO][6399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" iface="eth0" netns="" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:22.860 [INFO][6399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:22.860 [INFO][6399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.386 [INFO][6467] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.387 [INFO][6467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.388 [INFO][6467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.508 [WARNING][6467] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.508 [INFO][6467] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" HandleID="k8s-pod-network.7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Workload="localhost-k8s-calico--apiserver--68bd47d56c--vlgfc-eth0" Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.630 [INFO][6467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:23.649286 containerd[1593]: 2026-04-16 01:11:23.635 [INFO][6399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122" Apr 16 01:11:23.649286 containerd[1593]: time="2026-04-16T01:11:23.647490375Z" level=info msg="TearDown network for sandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" successfully" Apr 16 01:11:23.657989 containerd[1593]: time="2026-04-16T01:11:23.656075366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:23.660009 containerd[1593]: time="2026-04-16T01:11:23.659155430Z" level=info msg="RemovePodSandbox \"7c2e684f2d034f4a43147196abc341ef4f6767e1166c867a1cb4713c810c5122\" returns successfully" Apr 16 01:11:23.671606 containerd[1593]: time="2026-04-16T01:11:23.671469868Z" level=info msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" Apr 16 01:11:23.948601 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:42728.service - OpenSSH per-connection server daemon (10.0.0.1:42728). Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.869 [WARNING][6531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"091fc483-3bbd-4649-ab92-475b732c9825", ResourceVersion:"1323", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4", Pod:"calico-apiserver-68bd47d56c-kk7cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8693cc81378", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.870 [INFO][6531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.870 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" iface="eth0" netns="" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.870 [INFO][6531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.870 [INFO][6531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.945 [INFO][6544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.945 [INFO][6544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.945 [INFO][6544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.984 [WARNING][6544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:23.984 [INFO][6544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:24.002 [INFO][6544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:24.040816 containerd[1593]: 2026-04-16 01:11:24.015 [INFO][6531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:24.046071 containerd[1593]: time="2026-04-16T01:11:24.040976362Z" level=info msg="TearDown network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" successfully" Apr 16 01:11:24.046071 containerd[1593]: time="2026-04-16T01:11:24.041006232Z" level=info msg="StopPodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" returns successfully" Apr 16 01:11:24.046071 containerd[1593]: time="2026-04-16T01:11:24.043884199Z" level=info msg="RemovePodSandbox for \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" Apr 16 01:11:24.046071 containerd[1593]: time="2026-04-16T01:11:24.043920589Z" level=info msg="Forcibly stopping sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\"" Apr 16 01:11:24.149036 sshd[6551]: Accepted publickey for core from 10.0.0.1 port 42728 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:24.156811 sshd[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:24.205620 systemd-logind[1572]: New session 20 of user core. Apr 16 01:11:24.217942 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 16 01:11:24.592584 kubelet[2807]: I0416 01:11:24.592469 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-68bd47d56c-kk7cd" podStartSLOduration=97.089836568 podStartE2EDuration="2m57.592448209s" podCreationTimestamp="2026-04-16 01:08:27 +0000 UTC" firstStartedPulling="2026-04-16 01:10:01.286412797 +0000 UTC m=+149.957842765" lastFinishedPulling="2026-04-16 01:11:21.789024453 +0000 UTC m=+230.460454406" observedRunningTime="2026-04-16 01:11:24.383001891 +0000 UTC m=+233.054431837" watchObservedRunningTime="2026-04-16 01:11:24.592448209 +0000 UTC m=+233.263878162" Apr 16 01:11:24.605993 kubelet[2807]: I0416 01:11:24.605707 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-gmdnv" podStartSLOduration=97.996798062 podStartE2EDuration="2m56.605656066s" podCreationTimestamp="2026-04-16 01:08:28 +0000 UTC" firstStartedPulling="2026-04-16 01:10:00.812601251 +0000 UTC m=+149.484031194" lastFinishedPulling="2026-04-16 01:11:19.421459246 +0000 UTC m=+228.092889198" observedRunningTime="2026-04-16 01:11:24.571338817 +0000 UTC m=+233.242768758" watchObservedRunningTime="2026-04-16 01:11:24.605656066 +0000 UTC m=+233.277086007" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.818 [WARNING][6569] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0", GenerateName:"calico-apiserver-68bd47d56c-", Namespace:"calico-system", SelfLink:"", UID:"091fc483-3bbd-4649-ab92-475b732c9825", ResourceVersion:"1658", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bd47d56c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91133d74a5cbce37a8b0fab3298e055fe7ba76c92700b705bb09a078cc42fcb4", Pod:"calico-apiserver-68bd47d56c-kk7cd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8693cc81378", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.820 [INFO][6569] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.820 [INFO][6569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" iface="eth0" netns="" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.820 [INFO][6569] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.820 [INFO][6569] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.943 [INFO][6605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.944 [INFO][6605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.950 [INFO][6605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.986 [WARNING][6605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:24.987 [INFO][6605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" HandleID="k8s-pod-network.11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Workload="localhost-k8s-calico--apiserver--68bd47d56c--kk7cd-eth0" Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:25.045 [INFO][6605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:25.080954 containerd[1593]: 2026-04-16 01:11:25.075 [INFO][6569] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085" Apr 16 01:11:25.080954 containerd[1593]: time="2026-04-16T01:11:25.080876561Z" level=info msg="TearDown network for sandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" successfully" Apr 16 01:11:25.121493 containerd[1593]: time="2026-04-16T01:11:25.118822321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:25.121493 containerd[1593]: time="2026-04-16T01:11:25.118925065Z" level=info msg="RemovePodSandbox \"11095900e5fb19168cd124ed4859b00d46248bf00174f8f1b8cb64b4addab085\" returns successfully" Apr 16 01:11:25.121493 containerd[1593]: time="2026-04-16T01:11:25.120309525Z" level=info msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" Apr 16 01:11:25.341823 sshd[6551]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:25.362566 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:42728.service: Deactivated successfully. Apr 16 01:11:25.368507 systemd[1]: session-20.scope: Deactivated successfully. 
Apr 16 01:11:25.375836 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Apr 16 01:11:25.378690 systemd-logind[1572]: Removed session 20. Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.333 [WARNING][6631] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--28mcm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e305132-072a-4841-9d59-183ab9643f4e", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55", Pod:"coredns-674b8bbfcf-28mcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d27047b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.334 [INFO][6631] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.334 [INFO][6631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" iface="eth0" netns="" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.334 [INFO][6631] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.334 [INFO][6631] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.421 [INFO][6639] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.422 [INFO][6639] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.422 [INFO][6639] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.469 [WARNING][6639] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.469 [INFO][6639] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.478 [INFO][6639] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:25.490707 containerd[1593]: 2026-04-16 01:11:25.481 [INFO][6631] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:25.494622 containerd[1593]: time="2026-04-16T01:11:25.493366293Z" level=info msg="TearDown network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" successfully" Apr 16 01:11:25.494622 containerd[1593]: time="2026-04-16T01:11:25.493461039Z" level=info msg="StopPodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" returns successfully" Apr 16 01:11:25.542509 containerd[1593]: time="2026-04-16T01:11:25.538895463Z" level=info msg="RemovePodSandbox for \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" Apr 16 01:11:25.542509 containerd[1593]: time="2026-04-16T01:11:25.539092537Z" level=info msg="Forcibly stopping sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\"" Apr 16 01:11:26.460935 kubelet[2807]: I0416 01:11:26.460747 2807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.063 [WARNING][6682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--28mcm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e305132-072a-4841-9d59-183ab9643f4e", ResourceVersion:"1294", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 7, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5690ec830c91c9dbcec603ef2cfa67d5ae40377101e5ee984fafeae6e42e7e55", Pod:"coredns-674b8bbfcf-28mcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d27047b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.064 [INFO][6682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.064 [INFO][6682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" iface="eth0" netns="" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.064 [INFO][6682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.064 [INFO][6682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.188 [INFO][6693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.189 [INFO][6693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.189 [INFO][6693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.364 [WARNING][6693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.367 [INFO][6693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" HandleID="k8s-pod-network.0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Workload="localhost-k8s-coredns--674b8bbfcf--28mcm-eth0" Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.380 [INFO][6693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:26.471491 containerd[1593]: 2026-04-16 01:11:26.455 [INFO][6682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d" Apr 16 01:11:26.471491 containerd[1593]: time="2026-04-16T01:11:26.471398422Z" level=info msg="TearDown network for sandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" successfully" Apr 16 01:11:26.524071 containerd[1593]: time="2026-04-16T01:11:26.522581298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:26.524071 containerd[1593]: time="2026-04-16T01:11:26.523173912Z" level=info msg="RemovePodSandbox \"0beec764a0c155403ccdcb6bb46297fccc0ed5b359192719a7160deb1de2229d\" returns successfully" Apr 16 01:11:26.526972 containerd[1593]: time="2026-04-16T01:11:26.524971106Z" level=info msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" Apr 16 01:11:27.703431 containerd[1593]: time="2026-04-16T01:11:27.702840171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:27.708315 containerd[1593]: time="2026-04-16T01:11:27.708067167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 01:11:27.714476 containerd[1593]: time="2026-04-16T01:11:27.714037359Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:27.725462 containerd[1593]: time="2026-04-16T01:11:27.725104334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 5.775510183s" Apr 16 01:11:27.725462 containerd[1593]: time="2026-04-16T01:11:27.725153369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 01:11:27.726557 containerd[1593]: 
time="2026-04-16T01:11:27.726453248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.216 [WARNING][6711] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gqrfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73d74924-8e40-46ed-8ff0-31c0cdbb144c", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225", Pod:"csi-node-driver-gqrfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19ad0c984a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.217 [INFO][6711] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.218 [INFO][6711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" iface="eth0" netns="" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.218 [INFO][6711] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.218 [INFO][6711] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.534 [INFO][6720] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.534 [INFO][6720] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.536 [INFO][6720] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.591 [WARNING][6720] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.610 [INFO][6720] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.712 [INFO][6720] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:27.734314 containerd[1593]: 2026-04-16 01:11:27.718 [INFO][6711] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:27.749069 containerd[1593]: time="2026-04-16T01:11:27.734612421Z" level=info msg="TearDown network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" successfully" Apr 16 01:11:27.749069 containerd[1593]: time="2026-04-16T01:11:27.734643369Z" level=info msg="StopPodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" returns successfully" Apr 16 01:11:27.749069 containerd[1593]: time="2026-04-16T01:11:27.736714154Z" level=info msg="RemovePodSandbox for \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" Apr 16 01:11:27.749069 containerd[1593]: time="2026-04-16T01:11:27.736749579Z" level=info msg="Forcibly stopping sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\"" Apr 16 01:11:27.821153 containerd[1593]: time="2026-04-16T01:11:27.818915176Z" level=info msg="CreateContainer within sandbox \"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 01:11:27.909365 containerd[1593]: time="2026-04-16T01:11:27.907554535Z" level=info msg="CreateContainer within sandbox \"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b56f5072ce6f699f56164487e4b4a767f1c7302b39cba359361fab976a13e0d8\"" Apr 16 01:11:27.935661 containerd[1593]: time="2026-04-16T01:11:27.933196219Z" level=info msg="StartContainer for \"b56f5072ce6f699f56164487e4b4a767f1c7302b39cba359361fab976a13e0d8\"" Apr 16 01:11:28.232979 systemd[1]: run-containerd-runc-k8s.io-b56f5072ce6f699f56164487e4b4a767f1c7302b39cba359361fab976a13e0d8-runc.08jkNk.mount: Deactivated successfully. Apr 16 01:11:28.426384 containerd[1593]: time="2026-04-16T01:11:28.425938302Z" level=info msg="StartContainer for \"b56f5072ce6f699f56164487e4b4a767f1c7302b39cba359361fab976a13e0d8\" returns successfully" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.271 [WARNING][6739] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gqrfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73d74924-8e40-46ed-8ff0-31c0cdbb144c", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75925d48bc4ba12bad842bc68a000602ed72eae2500356952ecf88a720d05225", Pod:"csi-node-driver-gqrfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19ad0c984a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.271 [INFO][6739] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.271 [INFO][6739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" iface="eth0" netns="" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.271 [INFO][6739] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.271 [INFO][6739] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.694 [INFO][6769] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.707 [INFO][6769] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.707 [INFO][6769] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.841 [WARNING][6769] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.842 [INFO][6769] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" HandleID="k8s-pod-network.6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Workload="localhost-k8s-csi--node--driver--gqrfc-eth0" Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.871 [INFO][6769] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:28.889140 containerd[1593]: 2026-04-16 01:11:28.882 [INFO][6739] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5" Apr 16 01:11:28.889140 containerd[1593]: time="2026-04-16T01:11:28.888688942Z" level=info msg="TearDown network for sandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" successfully" Apr 16 01:11:28.940555 containerd[1593]: time="2026-04-16T01:11:28.939587861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:28.955064 containerd[1593]: time="2026-04-16T01:11:28.942393792Z" level=info msg="RemovePodSandbox \"6e5f21c21f77bef83504a69317a8e582cb526d7ef7ce0f07c1a91de7b53674d5\" returns successfully" Apr 16 01:11:29.014699 containerd[1593]: time="2026-04-16T01:11:29.012908483Z" level=info msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" Apr 16 01:11:29.607012 kubelet[2807]: E0416 01:11:29.605994 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:29.809749 kubelet[2807]: I0416 01:11:29.808884 2807 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 01:11:29.812015 kubelet[2807]: I0416 01:11:29.811702 2807 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.388 [WARNING][6836] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gmdnv-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe", ResourceVersion:"1662", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7", Pod:"goldmane-5b85766d88-gmdnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1f56873766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.390 [INFO][6836] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.390 [INFO][6836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" iface="eth0" netns="" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.390 [INFO][6836] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.390 [INFO][6836] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.627 [INFO][6850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.628 [INFO][6850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.628 [INFO][6850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.828 [WARNING][6850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.828 [INFO][6850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.877 [INFO][6850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:29.919666 containerd[1593]: 2026-04-16 01:11:29.894 [INFO][6836] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:29.919666 containerd[1593]: time="2026-04-16T01:11:29.913956851Z" level=info msg="TearDown network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" successfully" Apr 16 01:11:29.919666 containerd[1593]: time="2026-04-16T01:11:29.918020403Z" level=info msg="StopPodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" returns successfully" Apr 16 01:11:29.923458 containerd[1593]: time="2026-04-16T01:11:29.922892070Z" level=info msg="RemovePodSandbox for \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" Apr 16 01:11:29.923458 containerd[1593]: time="2026-04-16T01:11:29.922918827Z" level=info msg="Forcibly stopping sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\"" Apr 16 01:11:30.371910 systemd[1]: Started sshd@20-10.0.0.62:22-10.0.0.1:48016.service - OpenSSH per-connection server daemon (10.0.0.1:48016). Apr 16 01:11:30.623431 sshd[6879]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:30.626394 sshd[6879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:30.676557 systemd-logind[1572]: New session 21 of user core. Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.347 [WARNING][6869] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gmdnv-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a9c586a-4eaa-4ef4-9e4f-4ec66519adfe", ResourceVersion:"1662", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 8, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b6ef9499eb409f377fd9d16ba1c7a54bd76c4d902481364f4b62d8977403fc7", Pod:"goldmane-5b85766d88-gmdnv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1f56873766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.351 [INFO][6869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.351 [INFO][6869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" iface="eth0" netns="" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.351 [INFO][6869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.351 [INFO][6869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.510 [INFO][6878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.511 [INFO][6878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.511 [INFO][6878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.628 [WARNING][6878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.628 [INFO][6878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" HandleID="k8s-pod-network.d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Workload="localhost-k8s-goldmane--5b85766d88--gmdnv-eth0" Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.665 [INFO][6878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:30.678679 containerd[1593]: 2026-04-16 01:11:30.674 [INFO][6869] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff" Apr 16 01:11:30.680964 containerd[1593]: time="2026-04-16T01:11:30.678911526Z" level=info msg="TearDown network for sandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" successfully" Apr 16 01:11:30.689443 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 01:11:30.718496 containerd[1593]: time="2026-04-16T01:11:30.717746516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 16 01:11:30.718496 containerd[1593]: time="2026-04-16T01:11:30.718359616Z" level=info msg="RemovePodSandbox \"d6217836d5dbd1f2b3a798fcbff83d96d831085b5912851eb04cee2e199600ff\" returns successfully" Apr 16 01:11:30.721425 containerd[1593]: time="2026-04-16T01:11:30.721394846Z" level=info msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:30.970 [WARNING][6900] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" WorkloadEndpoint="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:30.971 [INFO][6900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:30.975 [INFO][6900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" iface="eth0" netns="" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:30.976 [INFO][6900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:30.977 [INFO][6900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.092 [INFO][6915] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.103 [INFO][6915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.104 [INFO][6915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.177 [WARNING][6915] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.178 [INFO][6915] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.192 [INFO][6915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:31.215931 containerd[1593]: 2026-04-16 01:11:31.207 [INFO][6900] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:31.215931 containerd[1593]: time="2026-04-16T01:11:31.213613330Z" level=info msg="TearDown network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" successfully" Apr 16 01:11:31.215931 containerd[1593]: time="2026-04-16T01:11:31.213689984Z" level=info msg="StopPodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" returns successfully" Apr 16 01:11:31.231583 containerd[1593]: time="2026-04-16T01:11:31.230536106Z" level=info msg="RemovePodSandbox for \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" Apr 16 01:11:31.231583 containerd[1593]: time="2026-04-16T01:11:31.230986932Z" level=info msg="Forcibly stopping sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\"" Apr 16 01:11:31.965686 kubelet[2807]: I0416 01:11:31.948962 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gqrfc" podStartSLOduration=86.08074468 podStartE2EDuration="2m53.948892646s" podCreationTimestamp="2026-04-16 01:08:38 +0000 UTC" firstStartedPulling="2026-04-16 01:09:59.867409529 +0000 UTC m=+148.538839472" lastFinishedPulling="2026-04-16 01:11:27.735557487 +0000 UTC m=+236.406987438" observedRunningTime="2026-04-16 01:11:28.824949798 +0000 UTC m=+237.496379751" watchObservedRunningTime="2026-04-16 01:11:31.948892646 +0000 UTC m=+240.620322592" Apr 16 01:11:32.537981 sshd[6879]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:32.632700 systemd[1]: Started sshd@21-10.0.0.62:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024). Apr 16 01:11:32.636908 systemd[1]: sshd@20-10.0.0.62:22-10.0.0.1:48016.service: Deactivated successfully. Apr 16 01:11:32.701024 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 01:11:32.751360 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Apr 16 01:11:32.772121 systemd-logind[1572]: Removed session 21. Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:31.551 [WARNING][6935] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" WorkloadEndpoint="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:31.552 [INFO][6935] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:31.552 [INFO][6935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" iface="eth0" netns="" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:31.552 [INFO][6935] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:31.552 [INFO][6935] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.449 [INFO][6945] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.450 [INFO][6945] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.452 [INFO][6945] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.719 [WARNING][6945] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.719 [INFO][6945] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" HandleID="k8s-pod-network.e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Workload="localhost-k8s-whisker--78d845779--8pjtj-eth0" Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.771 [INFO][6945] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:11:32.836751 containerd[1593]: 2026-04-16 01:11:32.779 [INFO][6935] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9" Apr 16 01:11:32.836751 containerd[1593]: time="2026-04-16T01:11:32.831427258Z" level=info msg="TearDown network for sandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" successfully" Apr 16 01:11:32.987728 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:11:32.958119 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:11:32.969481 systemd-resolved[1467]: Flushed all caches. Apr 16 01:11:33.045650 sshd[6955]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:33.140154 containerd[1593]: time="2026-04-16T01:11:33.136576812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:11:33.140762 containerd[1593]: time="2026-04-16T01:11:33.140618643Z" level=info msg="RemovePodSandbox \"e672e4393e932e971ebe66471f006abfbf5dd8a3f066009022ee831b8b44a8a9\" returns successfully" Apr 16 01:11:33.175670 sshd[6955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:33.246467 systemd-logind[1572]: New session 22 of user core. Apr 16 01:11:33.253104 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 01:11:34.729621 sshd[6955]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:34.744020 systemd[1]: Started sshd@22-10.0.0.62:22-10.0.0.1:48030.service - OpenSSH per-connection server daemon (10.0.0.1:48030). Apr 16 01:11:34.745918 systemd[1]: sshd@21-10.0.0.62:22-10.0.0.1:48024.service: Deactivated successfully. Apr 16 01:11:34.750691 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 01:11:34.750706 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. Apr 16 01:11:34.754694 systemd-logind[1572]: Removed session 22. Apr 16 01:11:34.930732 sshd[6971]: Accepted publickey for core from 10.0.0.1 port 48030 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:35.007130 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:35.020885 systemd-logind[1572]: New session 23 of user core. Apr 16 01:11:35.030895 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 01:11:38.332696 sshd[6971]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:38.440740 systemd[1]: Started sshd@23-10.0.0.62:22-10.0.0.1:48032.service - OpenSSH per-connection server daemon (10.0.0.1:48032). Apr 16 01:11:38.487004 systemd[1]: sshd@22-10.0.0.62:22-10.0.0.1:48030.service: Deactivated successfully. Apr 16 01:11:38.529174 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 01:11:38.531740 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit. Apr 16 01:11:38.567503 systemd-logind[1572]: Removed session 23. Apr 16 01:11:38.792091 sshd[7016]: Accepted publickey for core from 10.0.0.1 port 48032 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:38.820439 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:38.896635 systemd-logind[1572]: New session 24 of user core. Apr 16 01:11:38.930658 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 01:11:39.556415 kubelet[2807]: E0416 01:11:39.555720 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:41.033752 systemd-journald[1167]: Under memory pressure, flushing caches. Apr 16 01:11:41.013705 systemd-resolved[1467]: Under memory pressure, flushing caches. Apr 16 01:11:41.013716 systemd-resolved[1467]: Flushed all caches. Apr 16 01:11:44.072787 sshd[7016]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:44.164056 systemd[1]: Started sshd@24-10.0.0.62:22-10.0.0.1:33556.service - OpenSSH per-connection server daemon (10.0.0.1:33556). Apr 16 01:11:44.164707 systemd[1]: sshd@23-10.0.0.62:22-10.0.0.1:48032.service: Deactivated successfully. Apr 16 01:11:44.325863 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 01:11:44.369571 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit. Apr 16 01:11:44.418955 systemd-logind[1572]: Removed session 24. 
Apr 16 01:11:44.919486 sshd[7060]: Accepted publickey for core from 10.0.0.1 port 33556 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:44.938584 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:44.993843 systemd-logind[1572]: New session 25 of user core. Apr 16 01:11:45.019807 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 01:11:46.063717 sshd[7060]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:46.069793 systemd[1]: sshd@24-10.0.0.62:22-10.0.0.1:33556.service: Deactivated successfully. Apr 16 01:11:46.079504 systemd-logind[1572]: Session 25 logged out. Waiting for processes to exit. Apr 16 01:11:46.080072 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 01:11:46.102358 systemd-logind[1572]: Removed session 25. Apr 16 01:11:49.588468 kubelet[2807]: E0416 01:11:49.587672 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:11:51.106212 systemd[1]: Started sshd@25-10.0.0.62:22-10.0.0.1:35912.service - OpenSSH per-connection server daemon (10.0.0.1:35912). Apr 16 01:11:51.744532 sshd[7080]: Accepted publickey for core from 10.0.0.1 port 35912 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:51.752182 sshd[7080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:51.777910 systemd-logind[1572]: New session 26 of user core. Apr 16 01:11:51.790064 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 01:11:52.818028 sshd[7080]: pam_unix(sshd:session): session closed for user core Apr 16 01:11:52.829480 systemd[1]: sshd@25-10.0.0.62:22-10.0.0.1:35912.service: Deactivated successfully. Apr 16 01:11:52.849913 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 01:11:52.853513 systemd-logind[1572]: Session 26 logged out. Waiting for processes to exit. Apr 16 01:11:52.863529 systemd-logind[1572]: Removed session 26. Apr 16 01:11:58.023104 systemd[1]: Started sshd@26-10.0.0.62:22-10.0.0.1:35922.service - OpenSSH per-connection server daemon (10.0.0.1:35922). Apr 16 01:11:58.745740 sshd[7119]: Accepted publickey for core from 10.0.0.1 port 35922 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:11:58.759113 sshd[7119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:11:58.793846 systemd-logind[1572]: New session 27 of user core. Apr 16 01:11:58.816629 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 01:12:01.022764 sshd[7119]: pam_unix(sshd:session): session closed for user core Apr 16 01:12:01.052532 systemd-logind[1572]: Session 27 logged out. Waiting for processes to exit. Apr 16 01:12:01.052534 systemd[1]: sshd@26-10.0.0.62:22-10.0.0.1:35922.service: Deactivated successfully. Apr 16 01:12:01.121119 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 01:12:01.162789 systemd-logind[1572]: Removed session 27. Apr 16 01:12:06.107147 systemd[1]: Started sshd@27-10.0.0.62:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096). 
Apr 16 01:12:06.699337 sshd[7139]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:12:06.703771 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:12:06.826409 systemd-logind[1572]: New session 28 of user core.
Apr 16 01:12:06.838820 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 16 01:12:07.876407 systemd[1]: run-containerd-runc-k8s.io-d6f52d8be2b846cdcf775bd4415ef52b0cac2f3db695926f13fddac5d13129d6-runc.1vKgnN.mount: Deactivated successfully.
Apr 16 01:12:10.229884 sshd[7139]: pam_unix(sshd:session): session closed for user core
Apr 16 01:12:10.383981 systemd[1]: sshd@27-10.0.0.62:22-10.0.0.1:54096.service: Deactivated successfully.
Apr 16 01:12:10.452947 systemd-logind[1572]: Session 28 logged out. Waiting for processes to exit.
Apr 16 01:12:10.459634 systemd[1]: session-28.scope: Deactivated successfully.
Apr 16 01:12:10.485983 systemd-logind[1572]: Removed session 28.
Apr 16 01:12:10.981656 systemd-journald[1167]: Under memory pressure, flushing caches.
Apr 16 01:12:10.965352 systemd-resolved[1467]: Under memory pressure, flushing caches.
Apr 16 01:12:10.965368 systemd-resolved[1467]: Flushed all caches.
Apr 16 01:12:15.233664 update_engine[1577]: I20260416 01:12:15.232596 1577 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 16 01:12:15.233664 update_engine[1577]: I20260416 01:12:15.232864 1577 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.236606 1577 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237447 1577 omaha_request_params.cc:62] Current group set to lts
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237564 1577 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237571 1577 update_attempter.cc:643] Scheduling an action processor start.
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237588 1577 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237627 1577 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237679 1577 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237684 1577 omaha_request_action.cc:272] Request:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]:
Apr 16 01:12:15.238660 update_engine[1577]: I20260416 01:12:15.237690 1577 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 01:12:15.240778 systemd[1]: Started sshd@28-10.0.0.62:22-10.0.0.1:46146.service - OpenSSH per-connection server daemon (10.0.0.1:46146).
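The update_engine entries above ("Current group set to lts", "Posting an Omaha request to disabled") are consistent with an update configuration whose server has deliberately been set to the literal string "disabled", so the update check cannot reach a real Omaha endpoint. On Flatcar that setting normally lives in /etc/flatcar/update.conf; the file itself is not shown in this log, but a configuration along these lines would produce exactly these messages (assumed, not captured):

    GROUP=lts
    SERVER=disabled

With a non-URL SERVER value the periodic Omaha check still fires, but every attempt fails at the curl step, which is the "Could not resolve host: disabled" error logged immediately afterwards.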
Apr 16 01:12:15.332577 locksmithd[1638]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 16 01:12:15.341393 update_engine[1577]: I20260416 01:12:15.332718 1577 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 01:12:15.343089 update_engine[1577]: I20260416 01:12:15.342553 1577 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 01:12:15.351583 update_engine[1577]: E20260416 01:12:15.350853 1577 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 01:12:15.351583 update_engine[1577]: I20260416 01:12:15.351338 1577 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 16 01:12:15.502425 sshd[7199]: Accepted publickey for core from 10.0.0.1 port 46146 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:12:15.522592 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:12:15.615449 systemd-logind[1572]: New session 29 of user core.
Apr 16 01:12:15.638865 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 16 01:12:17.667896 kubelet[2807]: E0416 01:12:17.666640 2807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:12:17.689784 sshd[7199]: pam_unix(sshd:session): session closed for user core
Apr 16 01:12:17.867685 systemd[1]: sshd@28-10.0.0.62:22-10.0.0.1:46146.service: Deactivated successfully.
Apr 16 01:12:17.989369 systemd[1]: session-29.scope: Deactivated successfully.
Apr 16 01:12:18.033547 systemd-logind[1572]: Session 29 logged out. Waiting for processes to exit.
Apr 16 01:12:18.412974 systemd-logind[1572]: Removed session 29.
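The locksmithd line above is the update status snapshot it obtains from update_engine (the same information update_engine exposes over D-Bus and that update_engine_client can query); because the Omaha server cannot be resolved, CurrentOperation stays at UPDATE_STATUS_CHECKING_FOR_UPDATE and NewVersion remains 0.0.0. A small, purely illustrative Python sketch for reading that key=value line, not locksmithd's own parser:

    # Parse 'LastCheckedTime=0 Progress=0 CurrentOperation="..." ...' into a dict.
    # Field names are taken verbatim from the locksmithd log line above.
    import shlex

    def parse_update_status(line):
        """Split a space-separated key=value status line, honouring quotes."""
        return dict(token.split("=", 1) for token in shlex.split(line))

    status = parse_update_status(
        'LastCheckedTime=0 Progress=0 '
        'CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" '
        'NewVersion=0.0.0 NewSize=0'
    )
    print(status["CurrentOperation"])  # UPDATE_STATUS_CHECKING_FOR_UPDATE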