Apr 14 00:41:32.978837 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 00:41:32.978866 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 00:41:32.978880 kernel: BIOS-provided physical RAM map:
Apr 14 00:41:32.978888 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 14 00:41:32.978895 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 14 00:41:32.978903 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 14 00:41:32.978913 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 14 00:41:32.978921 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 14 00:41:32.978929 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 14 00:41:32.978937 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 14 00:41:32.978948 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 14 00:41:32.978956 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 14 00:41:32.978964 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 14 00:41:32.978972 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 14 00:41:32.978983 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 14 00:41:32.978992 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 14 00:41:32.979002 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 14 00:41:32.979011 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 14 00:41:32.979020 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 14 00:41:32.979028 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 00:41:32.979036 kernel: NX (Execute Disable) protection: active
Apr 14 00:41:32.979045 kernel: APIC: Static calls initialized
Apr 14 00:41:32.979053 kernel: efi: EFI v2.7 by EDK II
Apr 14 00:41:32.979062 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 14 00:41:32.979071 kernel: SMBIOS 2.8 present.
Apr 14 00:41:32.979080 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 14 00:41:32.979088 kernel: Hypervisor detected: KVM Apr 14 00:41:32.979098 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 14 00:41:32.979107 kernel: kvm-clock: using sched offset of 6088628898 cycles Apr 14 00:41:32.979117 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 14 00:41:32.979164 kernel: tsc: Detected 2793.438 MHz processor Apr 14 00:41:32.979174 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 14 00:41:32.979183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 14 00:41:32.979192 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 14 00:41:32.979201 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 14 00:41:32.979211 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 14 00:41:32.979222 kernel: Using GB pages for direct mapping Apr 14 00:41:32.979232 kernel: Secure boot disabled Apr 14 00:41:32.979241 kernel: ACPI: Early table checksum verification disabled Apr 14 00:41:32.979250 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 14 00:41:32.979264 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 14 00:41:32.979273 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979283 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979294 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 14 00:41:32.979304 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979314 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979323 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979333 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 14 00:41:32.979343 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 14 00:41:32.979352 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 14 00:41:32.979364 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 14 00:41:32.979373 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 14 00:41:32.979383 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 14 00:41:32.979392 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 14 00:41:32.979401 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 14 00:41:32.979410 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 14 00:41:32.979419 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 14 00:41:32.979429 kernel: No NUMA configuration found Apr 14 00:41:32.979439 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 14 00:41:32.979451 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 14 00:41:32.979461 kernel: Zone ranges: Apr 14 00:41:32.979471 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 14 00:41:32.979481 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Apr 14 00:41:32.979491 kernel: Normal empty Apr 14 00:41:32.979501 kernel: Movable zone start for each node Apr 14 00:41:32.979511 kernel: Early memory node ranges Apr 14 00:41:32.979520 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 14 00:41:32.979531 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 14 00:41:32.979540 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 14 00:41:32.979552 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 14 00:41:32.979561 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 14 00:41:32.979571 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 14 00:41:32.979581 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 14 00:41:32.979591 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 00:41:32.979601 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 14 00:41:32.979610 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 14 00:41:32.979749 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 14 00:41:32.979760 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 14 00:41:32.979774 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 14 00:41:32.979784 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 14 00:41:32.979794 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 14 00:41:32.979804 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 14 00:41:32.979814 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 14 00:41:32.979824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 14 00:41:32.979833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 14 00:41:32.979843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 14 00:41:32.979853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 14 00:41:32.979865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 14 00:41:32.979875 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 14 00:41:32.979885 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 14 00:41:32.979895 kernel: TSC deadline timer available Apr 14 00:41:32.979905 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 14 00:41:32.979915 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 14 00:41:32.979924 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 14 00:41:32.979934 kernel: kvm-guest: setup PV sched yield Apr 14 00:41:32.979944 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 14 00:41:32.979954 kernel: Booting paravirtualized kernel on KVM Apr 14 00:41:32.979966 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 14 00:41:32.979976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 14 00:41:32.979986 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 14 00:41:32.979997 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 14 00:41:32.980007 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 14 00:41:32.980017 kernel: kvm-guest: PV spinlocks enabled Apr 14 00:41:32.980026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 14 00:41:32.980037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 
00:41:32.980050 kernel: random: crng init done Apr 14 00:41:32.980059 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 14 00:41:32.980070 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 14 00:41:32.980080 kernel: Fallback order for Node 0: 0 Apr 14 00:41:32.980089 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 14 00:41:32.980099 kernel: Policy zone: DMA32 Apr 14 00:41:32.980109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 14 00:41:32.980147 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 172120K reserved, 0K cma-reserved) Apr 14 00:41:32.980159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 14 00:41:32.980173 kernel: ftrace: allocating 37996 entries in 149 pages Apr 14 00:41:32.980183 kernel: ftrace: allocated 149 pages with 4 groups Apr 14 00:41:32.980193 kernel: Dynamic Preempt: voluntary Apr 14 00:41:32.980203 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 14 00:41:32.980222 kernel: rcu: RCU event tracing is enabled. Apr 14 00:41:32.980235 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 14 00:41:32.980245 kernel: Trampoline variant of Tasks RCU enabled. Apr 14 00:41:32.980255 kernel: Rude variant of Tasks RCU enabled. Apr 14 00:41:32.980265 kernel: Tracing variant of Tasks RCU enabled. Apr 14 00:41:32.980276 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 14 00:41:32.980286 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 14 00:41:32.980297 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 14 00:41:32.980311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 14 00:41:32.980322 kernel: Console: colour dummy device 80x25 Apr 14 00:41:32.980333 kernel: printk: console [ttyS0] enabled Apr 14 00:41:32.980343 kernel: ACPI: Core revision 20230628 Apr 14 00:41:32.980354 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 14 00:41:32.980366 kernel: APIC: Switch to symmetric I/O mode setup Apr 14 00:41:32.980377 kernel: x2apic enabled Apr 14 00:41:32.980388 kernel: APIC: Switched APIC routing to: physical x2apic Apr 14 00:41:32.980399 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 14 00:41:32.980410 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 14 00:41:32.980421 kernel: kvm-guest: setup PV IPIs Apr 14 00:41:32.980432 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 14 00:41:32.980443 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 00:41:32.980454 kernel: Calibrating delay loop (skipped) preset value.. 
5586.87 BogoMIPS (lpj=2793438) Apr 14 00:41:32.980467 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 14 00:41:32.980477 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 14 00:41:32.980488 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 14 00:41:32.980498 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 14 00:41:32.980509 kernel: Spectre V2 : Mitigation: Retpolines Apr 14 00:41:32.980520 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 14 00:41:32.980531 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 14 00:41:32.980542 kernel: RETBleed: Vulnerable Apr 14 00:41:32.980554 kernel: Speculative Store Bypass: Vulnerable Apr 14 00:41:32.980565 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 14 00:41:32.980575 kernel: GDS: Unknown: Dependent on hypervisor status Apr 14 00:41:32.980586 kernel: active return thunk: its_return_thunk Apr 14 00:41:32.980597 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 14 00:41:32.980607 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 14 00:41:32.980693 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 14 00:41:32.980705 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 14 00:41:32.980716 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 14 00:41:32.980729 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 14 00:41:32.980741 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 14 00:41:32.980751 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 14 00:41:32.980762 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 14 00:41:32.980772 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 14 00:41:32.980782 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 14 00:41:32.980792 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 14 00:41:32.980802 kernel: Freeing SMP alternatives memory: 32K Apr 14 00:41:32.980812 kernel: pid_max: default: 32768 minimum: 301 Apr 14 00:41:32.980825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 14 00:41:32.980835 kernel: landlock: Up and running. Apr 14 00:41:32.980846 kernel: SELinux: Initializing. Apr 14 00:41:32.980855 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 00:41:32.980865 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 14 00:41:32.980876 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 14 00:41:32.980915 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:41:32.980926 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:41:32.980936 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 14 00:41:32.980949 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 14 00:41:32.980959 kernel: signal: max sigframe size: 3632 Apr 14 00:41:32.980969 kernel: rcu: Hierarchical SRCU implementation. Apr 14 00:41:32.980980 kernel: rcu: Max phase no-delay instances is 400. 
Apr 14 00:41:32.980990 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 14 00:41:32.980999 kernel: smp: Bringing up secondary CPUs ... Apr 14 00:41:32.981009 kernel: smpboot: x86: Booting SMP configuration: Apr 14 00:41:32.981019 kernel: .... node #0, CPUs: #1 #2 #3 Apr 14 00:41:32.981028 kernel: smp: Brought up 1 node, 4 CPUs Apr 14 00:41:32.981041 kernel: smpboot: Max logical packages: 1 Apr 14 00:41:32.981050 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 14 00:41:32.981060 kernel: devtmpfs: initialized Apr 14 00:41:32.981070 kernel: x86/mm: Memory block size: 128MB Apr 14 00:41:32.981079 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 14 00:41:32.981090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 14 00:41:32.981100 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 14 00:41:32.981110 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 14 00:41:32.981185 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 14 00:41:32.981204 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 14 00:41:32.981215 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 14 00:41:32.981225 kernel: pinctrl core: initialized pinctrl subsystem Apr 14 00:41:32.981235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 14 00:41:32.981246 kernel: audit: initializing netlink subsys (disabled) Apr 14 00:41:32.981255 kernel: audit: type=2000 audit(1776127292.039:1): state=initialized audit_enabled=0 res=1 Apr 14 00:41:32.981265 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 14 00:41:32.981275 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 14 00:41:32.981285 kernel: cpuidle: using governor menu Apr 14 00:41:32.981298 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 14 00:41:32.981308 kernel: dca service started, version 1.12.1 Apr 14 00:41:32.981318 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 14 00:41:32.981329 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 14 00:41:32.981339 kernel: PCI: Using configuration type 1 for base access Apr 14 00:41:32.981350 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 14 00:41:32.981360 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 14 00:41:32.981370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 14 00:41:32.981380 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 14 00:41:32.981392 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 14 00:41:32.981402 kernel: ACPI: Added _OSI(Module Device) Apr 14 00:41:32.981412 kernel: ACPI: Added _OSI(Processor Device) Apr 14 00:41:32.981421 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 14 00:41:32.981431 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 14 00:41:32.981440 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 14 00:41:32.981450 kernel: ACPI: Interpreter enabled Apr 14 00:41:32.981460 kernel: ACPI: PM: (supports S0 S3 S5) Apr 14 00:41:32.981470 kernel: ACPI: Using IOAPIC for interrupt routing Apr 14 00:41:32.981483 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 14 00:41:32.981495 kernel: PCI: Using E820 reservations for host bridge windows Apr 14 00:41:32.981504 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 14 00:41:32.981514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 14 00:41:32.981799 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 14 00:41:32.981906 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 14 00:41:32.981997 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 14 00:41:32.982015 kernel: PCI host bridge to bus 0000:00 Apr 14 00:41:32.982111 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 14 00:41:32.982248 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 14 00:41:32.982334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 14 00:41:32.982417 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 14 00:41:32.982496 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 14 00:41:32.982827 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 14 00:41:32.982941 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 14 00:41:32.983053 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 14 00:41:32.983201 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 14 00:41:32.983302 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 14 00:41:32.983396 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 14 00:41:32.983489 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 14 00:41:32.983581 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 14 00:41:32.983750 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 14 00:41:32.983858 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 14 00:41:32.983954 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 14 00:41:32.984045 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 14 00:41:32.984177 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 14 00:41:32.984284 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 14 00:41:32.984385 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 14 00:41:32.984480 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Apr 14 00:41:32.984575 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 14 00:41:32.984742 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 14 00:41:32.984846 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 14 00:41:32.984941 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 14 00:41:32.985035 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 14 00:41:32.985226 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 14 00:41:32.985353 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 14 00:41:32.985447 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 14 00:41:32.985540 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 14 00:41:32.985777 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 14 00:41:32.985883 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 14 00:41:32.986194 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 14 00:41:32.986343 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 14 00:41:32.986358 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 14 00:41:32.986369 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 14 00:41:32.986379 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 14 00:41:32.986389 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 14 00:41:32.986400 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 14 00:41:32.986410 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 14 00:41:32.986420 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 14 00:41:32.986434 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 14 00:41:32.986444 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 14 00:41:32.986454 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 14 00:41:32.986465 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 14 00:41:32.986475 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 14 00:41:32.986486 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 14 00:41:32.986496 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 14 00:41:32.986507 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 14 00:41:32.986517 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 14 00:41:32.986591 kernel: iommu: Default domain type: Translated Apr 14 00:41:32.986603 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 14 00:41:32.986614 kernel: efivars: Registered efivars operations Apr 14 00:41:32.986664 kernel: PCI: Using ACPI for IRQ routing Apr 14 00:41:32.986674 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 14 00:41:32.986684 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 14 00:41:32.986694 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 14 00:41:32.986704 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 14 00:41:32.986714 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 14 00:41:32.986826 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 14 00:41:32.986922 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 14 00:41:32.987016 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 14 00:41:32.987031 kernel: vgaarb: loaded Apr 14 00:41:32.987043 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Apr 14 00:41:32.987054 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 14 00:41:32.987065 kernel: clocksource: Switched to clocksource kvm-clock Apr 14 00:41:32.987075 kernel: VFS: Disk quotas dquot_6.6.0 Apr 14 00:41:32.987085 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 14 00:41:32.987100 kernel: pnp: PnP ACPI init Apr 14 00:41:32.987245 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 14 00:41:32.987262 kernel: pnp: PnP ACPI: found 6 devices Apr 14 00:41:32.987273 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 14 00:41:32.987284 kernel: NET: Registered PF_INET protocol family Apr 14 00:41:32.987295 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 14 00:41:32.987306 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 14 00:41:32.987317 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 14 00:41:32.987332 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 14 00:41:32.987343 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 14 00:41:32.987354 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 14 00:41:32.987364 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 00:41:32.987374 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 14 00:41:32.987384 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 14 00:41:32.987394 kernel: NET: Registered PF_XDP protocol family Apr 14 00:41:32.987491 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 14 00:41:32.987593 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 14 00:41:32.987785 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 14 00:41:32.987866 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 14 00:41:32.987944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 14 00:41:32.988019 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 14 00:41:32.988099 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 14 00:41:32.988222 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 14 00:41:32.988264 kernel: PCI: CLS 0 bytes, default 64 Apr 14 00:41:32.988402 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 14 00:41:32.988432 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 14 00:41:32.988443 kernel: Initialise system trusted keyrings Apr 14 00:41:32.988454 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 14 00:41:32.988464 kernel: Key type asymmetric registered Apr 14 00:41:32.988474 kernel: Asymmetric key parser 'x509' registered Apr 14 00:41:32.988485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 14 00:41:32.988495 kernel: io scheduler mq-deadline registered Apr 14 00:41:32.988505 kernel: io scheduler kyber registered Apr 14 00:41:32.988518 kernel: io scheduler bfq registered Apr 14 00:41:32.988529 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 14 00:41:32.988540 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 14 00:41:32.988551 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 
23 Apr 14 00:41:32.988562 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 14 00:41:32.988573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 14 00:41:32.988584 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 14 00:41:32.988595 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 14 00:41:32.988606 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 14 00:41:32.988661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 14 00:41:32.988776 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 14 00:41:32.988865 kernel: rtc_cmos 00:04: registered as rtc0 Apr 14 00:41:32.988879 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 14 00:41:32.988965 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T00:41:32 UTC (1776127292) Apr 14 00:41:32.989051 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 14 00:41:32.989065 kernel: intel_pstate: CPU model not supported Apr 14 00:41:32.989075 kernel: efifb: probing for efifb Apr 14 00:41:32.989090 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 14 00:41:32.989101 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 14 00:41:32.989111 kernel: efifb: scrolling: redraw Apr 14 00:41:32.989160 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 14 00:41:32.989171 kernel: Console: switching to colour frame buffer device 100x37 Apr 14 00:41:32.989182 kernel: fb0: EFI VGA frame buffer device Apr 14 00:41:32.989209 kernel: pstore: Using crash dump compression: deflate Apr 14 00:41:32.989222 kernel: pstore: Registered efi_pstore as persistent store backend Apr 14 00:41:32.989233 kernel: NET: Registered PF_INET6 protocol family Apr 14 00:41:32.989246 kernel: Segment Routing with IPv6 Apr 14 00:41:32.989258 kernel: In-situ OAM (IOAM) with IPv6 Apr 14 00:41:32.989269 kernel: NET: Registered PF_PACKET protocol family Apr 14 00:41:32.989280 kernel: Key type dns_resolver registered Apr 14 00:41:32.989290 kernel: IPI shorthand broadcast: enabled Apr 14 00:41:32.989301 kernel: sched_clock: Marking stable (1106091409, 353427186)->(1664142252, -204623657) Apr 14 00:41:32.989311 kernel: registered taskstats version 1 Apr 14 00:41:32.989322 kernel: Loading compiled-in X.509 certificates Apr 14 00:41:32.989332 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 14 00:41:32.989345 kernel: Key type .fscrypt registered Apr 14 00:41:32.989355 kernel: Key type fscrypt-provisioning registered Apr 14 00:41:32.989365 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 14 00:41:32.989376 kernel: ima: Allocated hash algorithm: sha1 Apr 14 00:41:32.989387 kernel: ima: No architecture policies found Apr 14 00:41:32.989397 kernel: clk: Disabling unused clocks Apr 14 00:41:32.989408 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 14 00:41:32.989418 kernel: Write protecting the kernel read-only data: 36864k Apr 14 00:41:32.989430 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 14 00:41:32.989445 kernel: Run /init as init process Apr 14 00:41:32.989454 kernel: with arguments: Apr 14 00:41:32.989465 kernel: /init Apr 14 00:41:32.989475 kernel: with environment: Apr 14 00:41:32.989485 kernel: HOME=/ Apr 14 00:41:32.989495 kernel: TERM=linux Apr 14 00:41:32.989509 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 00:41:32.989524 systemd[1]: Detected virtualization kvm. Apr 14 00:41:32.989538 systemd[1]: Detected architecture x86-64. Apr 14 00:41:32.989549 systemd[1]: Running in initrd. Apr 14 00:41:32.989560 systemd[1]: No hostname configured, using default hostname. Apr 14 00:41:32.989572 systemd[1]: Hostname set to . Apr 14 00:41:32.989584 systemd[1]: Initializing machine ID from VM UUID. Apr 14 00:41:32.989597 systemd[1]: Queued start job for default target initrd.target. Apr 14 00:41:32.989609 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 00:41:32.989677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 00:41:32.989690 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 14 00:41:32.989701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 00:41:32.989712 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 14 00:41:32.989724 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 14 00:41:32.989740 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 14 00:41:32.989752 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 14 00:41:32.989763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 00:41:32.989775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 00:41:32.989788 systemd[1]: Reached target paths.target - Path Units. Apr 14 00:41:32.989800 systemd[1]: Reached target slices.target - Slice Units. Apr 14 00:41:32.989812 systemd[1]: Reached target swap.target - Swaps. Apr 14 00:41:32.989825 systemd[1]: Reached target timers.target - Timer Units. Apr 14 00:41:32.989839 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 00:41:32.989851 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 00:41:32.989863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 00:41:32.989876 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Apr 14 00:41:32.989887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 00:41:32.989899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 00:41:32.989911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 00:41:32.989924 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 00:41:32.989936 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 14 00:41:32.989950 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 00:41:32.989962 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 14 00:41:32.989973 systemd[1]: Starting systemd-fsck-usr.service... Apr 14 00:41:32.989986 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 00:41:32.990002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 00:41:32.990016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 00:41:32.990030 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 14 00:41:32.990068 systemd-journald[194]: Collecting audit messages is disabled. Apr 14 00:41:32.990100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 00:41:32.990110 systemd[1]: Finished systemd-fsck-usr.service. Apr 14 00:41:32.990162 systemd-journald[194]: Journal started Apr 14 00:41:32.990187 systemd-journald[194]: Runtime Journal (/run/log/journal/0675c2d7c08a4321955fbc25f713962f) is 6.0M, max 48.3M, 42.2M free. Apr 14 00:41:32.993723 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 00:41:32.993601 systemd-modules-load[195]: Inserted module 'overlay' Apr 14 00:41:32.996713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 00:41:33.014994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 00:41:33.016541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 00:41:33.030208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 00:41:33.030509 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 00:41:33.036764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 00:41:33.054242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 14 00:41:33.049951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 00:41:33.051354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 00:41:33.060150 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 14 00:41:33.060700 kernel: Bridge firewalling registered Apr 14 00:41:33.061256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 00:41:33.075173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 00:41:33.078344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 00:41:33.082174 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 14 00:41:33.099668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 14 00:41:33.107422 dracut-cmdline[230]: dracut-dracut-053 Apr 14 00:41:33.116285 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 00:41:33.110957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 00:41:33.159337 systemd-resolved[239]: Positive Trust Anchors: Apr 14 00:41:33.159353 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 00:41:33.159388 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 00:41:33.162519 systemd-resolved[239]: Defaulting to hostname 'linux'. Apr 14 00:41:33.163570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 00:41:33.164492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 00:41:33.227695 kernel: SCSI subsystem initialized Apr 14 00:41:33.237722 kernel: Loading iSCSI transport class v2.0-870. Apr 14 00:41:33.249735 kernel: iscsi: registered transport (tcp) Apr 14 00:41:33.279434 kernel: iscsi: registered transport (qla4xxx) Apr 14 00:41:33.279523 kernel: QLogic iSCSI HBA Driver Apr 14 00:41:33.336822 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 14 00:41:33.393914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 14 00:41:33.421031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 14 00:41:33.421110 kernel: device-mapper: uevent: version 1.0.3 Apr 14 00:41:33.423074 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 14 00:41:33.466691 kernel: raid6: avx512x4 gen() 42596 MB/s Apr 14 00:41:33.483697 kernel: raid6: avx512x2 gen() 41184 MB/s Apr 14 00:41:33.500728 kernel: raid6: avx512x1 gen() 38954 MB/s Apr 14 00:41:33.517710 kernel: raid6: avx2x4 gen() 34855 MB/s Apr 14 00:41:33.534751 kernel: raid6: avx2x2 gen() 34693 MB/s Apr 14 00:41:33.552985 kernel: raid6: avx2x1 gen() 25999 MB/s Apr 14 00:41:33.553053 kernel: raid6: using algorithm avx512x4 gen() 42596 MB/s Apr 14 00:41:33.571870 kernel: raid6: .... xor() 9590 MB/s, rmw enabled Apr 14 00:41:33.571921 kernel: raid6: using avx512x2 recovery algorithm Apr 14 00:41:33.592726 kernel: xor: automatically using best checksumming function avx Apr 14 00:41:33.750742 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 14 00:41:33.763699 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 14 00:41:33.771078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 00:41:33.787616 systemd-udevd[417]: Using default interface naming scheme 'v255'. 
Apr 14 00:41:33.790933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 00:41:33.805855 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 14 00:41:33.819018 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Apr 14 00:41:33.851830 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 00:41:33.867871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 00:41:33.899716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 00:41:33.910896 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 14 00:41:33.923417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 14 00:41:33.930315 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 00:41:33.933798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 00:41:33.938203 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 00:41:33.955772 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 14 00:41:33.956942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 14 00:41:33.966874 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 14 00:41:33.972036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 00:41:33.975201 kernel: cryptd: max_cpu_qlen set to 1000 Apr 14 00:41:33.975330 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 00:41:33.985817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 14 00:41:33.985882 kernel: GPT:9289727 != 19775487 Apr 14 00:41:33.985893 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 14 00:41:33.985900 kernel: GPT:9289727 != 19775487 Apr 14 00:41:33.985907 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 14 00:41:33.985922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 00:41:33.991540 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 00:41:34.000385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 00:41:34.000756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 00:41:34.004219 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 00:41:34.022655 kernel: libata version 3.00 loaded. Apr 14 00:41:34.030215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 00:41:34.044941 kernel: AVX2 version of gcm_enc/dec engaged. Apr 14 00:41:34.044960 kernel: ahci 0000:00:1f.2: version 3.0 Apr 14 00:41:34.045103 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 14 00:41:34.045112 kernel: AES CTR mode by8 optimization enabled Apr 14 00:41:34.034987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 14 00:41:34.052943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 14 00:41:34.060573 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 14 00:41:34.060826 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 14 00:41:34.065748 kernel: scsi host0: ahci Apr 14 00:41:34.070789 kernel: scsi host1: ahci Apr 14 00:41:34.077424 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (463) Apr 14 00:41:34.077458 kernel: scsi host2: ahci Apr 14 00:41:34.082840 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 00:41:34.089106 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Apr 14 00:41:34.089190 kernel: scsi host3: ahci Apr 14 00:41:34.089371 kernel: scsi host4: ahci Apr 14 00:41:34.104579 kernel: scsi host5: ahci Apr 14 00:41:34.104878 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 14 00:41:34.104895 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 14 00:41:34.104907 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 14 00:41:34.104928 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 14 00:41:34.104940 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 14 00:41:34.104951 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 14 00:41:34.112832 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 14 00:41:34.121900 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 14 00:41:34.124859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 00:41:34.127167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 14 00:41:34.127554 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 14 00:41:34.142892 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 14 00:41:34.144668 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 00:41:34.163264 disk-uuid[579]: Primary Header is updated. Apr 14 00:41:34.163264 disk-uuid[579]: Secondary Entries is updated. Apr 14 00:41:34.163264 disk-uuid[579]: Secondary Header is updated. 
Apr 14 00:41:34.172930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 00:41:34.414717 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 14 00:41:34.420657 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 14 00:41:34.420728 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 14 00:41:34.420738 kernel: ata3.00: applying bridge limits Apr 14 00:41:34.424727 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 14 00:41:34.424786 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 14 00:41:34.426665 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 14 00:41:34.431705 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 14 00:41:34.431740 kernel: ata3.00: configured for UDMA/100 Apr 14 00:41:34.435714 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 14 00:41:34.490893 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 14 00:41:34.491219 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 14 00:41:34.503708 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 14 00:41:35.184673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 00:41:35.184849 disk-uuid[580]: The operation has completed successfully. Apr 14 00:41:35.222995 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 14 00:41:35.223242 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 14 00:41:35.305984 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 14 00:41:35.316457 sh[597]: Success Apr 14 00:41:35.335706 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 14 00:41:35.384037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 14 00:41:35.408436 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 14 00:41:35.412120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 14 00:41:35.437810 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 14 00:41:35.437931 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 14 00:41:35.437946 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 14 00:41:35.445856 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 14 00:41:35.445928 kernel: BTRFS info (device dm-0): using free space tree Apr 14 00:41:35.468460 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 14 00:41:35.469575 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 14 00:41:35.488308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 14 00:41:35.496858 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 14 00:41:35.514396 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 00:41:35.514493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 00:41:35.514514 kernel: BTRFS info (device vda6): using free space tree Apr 14 00:41:35.522668 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 00:41:35.538001 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 14 00:41:35.543420 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 00:41:35.551961 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 14 00:41:35.564984 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 14 00:41:35.644122 ignition[686]: Ignition 2.19.0 Apr 14 00:41:35.644178 ignition[686]: Stage: fetch-offline Apr 14 00:41:35.644204 ignition[686]: no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:35.644210 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:35.644303 ignition[686]: parsed url from cmdline: "" Apr 14 00:41:35.644307 ignition[686]: no config URL provided Apr 14 00:41:35.644312 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Apr 14 00:41:35.644321 ignition[686]: no config at "/usr/lib/ignition/user.ign" Apr 14 00:41:35.644351 ignition[686]: op(1): [started] loading QEMU firmware config module Apr 14 00:41:35.644357 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 14 00:41:35.671294 ignition[686]: op(1): [finished] loading QEMU firmware config module Apr 14 00:41:35.672162 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 00:41:35.703016 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 00:41:35.738433 systemd-networkd[785]: lo: Link UP Apr 14 00:41:35.738511 systemd-networkd[785]: lo: Gained carrier Apr 14 00:41:35.740278 systemd-networkd[785]: Enumeration completed Apr 14 00:41:35.741450 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 00:41:35.741453 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 00:41:35.741815 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 00:41:35.743774 systemd-networkd[785]: eth0: Link UP Apr 14 00:41:35.743777 systemd-networkd[785]: eth0: Gained carrier Apr 14 00:41:35.743784 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 00:41:35.749349 systemd[1]: Reached target network.target - Network. Apr 14 00:41:35.773928 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 00:41:35.923597 ignition[686]: parsing config with SHA512: 69a160bc54fc763314273dde70b56872fa98bef6fc8bca91b0e9068b252b9a50fc8745dfba76a34b6747cfb13f015a59683d78933620c106a1389f25cf370929 Apr 14 00:41:35.951969 kernel: hrtimer: interrupt took 3676953 ns Apr 14 00:41:35.987922 unknown[686]: fetched base config from "system" Apr 14 00:41:35.987953 unknown[686]: fetched user config from "qemu" Apr 14 00:41:35.989200 ignition[686]: fetch-offline: fetch-offline passed Apr 14 00:41:35.989306 ignition[686]: Ignition finished successfully Apr 14 00:41:36.002357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 00:41:36.002879 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 14 00:41:36.035486 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 14 00:41:36.071412 ignition[791]: Ignition 2.19.0 Apr 14 00:41:36.071578 ignition[791]: Stage: kargs Apr 14 00:41:36.071920 ignition[791]: no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:36.071946 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:36.074059 ignition[791]: kargs: kargs passed Apr 14 00:41:36.074130 ignition[791]: Ignition finished successfully Apr 14 00:41:36.087853 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 14 00:41:36.112429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 14 00:41:36.135903 ignition[799]: Ignition 2.19.0 Apr 14 00:41:36.135940 ignition[799]: Stage: disks Apr 14 00:41:36.136207 ignition[799]: no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:36.136219 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:36.137835 ignition[799]: disks: disks passed Apr 14 00:41:36.137905 ignition[799]: Ignition finished successfully Apr 14 00:41:36.150226 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 14 00:41:36.155524 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 14 00:41:36.159447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 00:41:36.164606 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 00:41:36.167746 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 00:41:36.182997 systemd[1]: Reached target basic.target - Basic System. Apr 14 00:41:36.202066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 14 00:41:36.224838 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 14 00:41:36.231247 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 14 00:41:36.241887 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 14 00:41:36.382592 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 14 00:41:36.386548 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 14 00:41:36.390509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 14 00:41:36.411963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 00:41:36.421757 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 14 00:41:36.477073 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 14 00:41:36.486764 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Apr 14 00:41:36.477254 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 14 00:41:36.477285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 00:41:36.510248 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 14 00:41:36.514112 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 00:41:36.514184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 00:41:36.514198 kernel: BTRFS info (device vda6): using free space tree Apr 14 00:41:36.531698 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 00:41:36.534907 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 14 00:41:36.542553 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 14 00:41:36.576791 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Apr 14 00:41:36.585356 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Apr 14 00:41:36.591881 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Apr 14 00:41:36.599047 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Apr 14 00:41:36.725059 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 14 00:41:36.740950 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 14 00:41:36.745075 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 14 00:41:36.760947 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 14 00:41:36.765394 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 00:41:36.786032 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 14 00:41:36.805540 ignition[932]: INFO : Ignition 2.19.0 Apr 14 00:41:36.805540 ignition[932]: INFO : Stage: mount Apr 14 00:41:36.809566 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:36.809566 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:36.809566 ignition[932]: INFO : mount: mount passed Apr 14 00:41:36.809566 ignition[932]: INFO : Ignition finished successfully Apr 14 00:41:36.814780 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 14 00:41:36.822076 systemd-networkd[785]: eth0: Gained IPv6LL Apr 14 00:41:36.839126 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 14 00:41:36.853190 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 00:41:36.870779 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945) Apr 14 00:41:36.870830 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 00:41:36.876593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 00:41:36.876691 kernel: BTRFS info (device vda6): using free space tree Apr 14 00:41:36.885662 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 00:41:36.888821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 00:41:36.920092 ignition[962]: INFO : Ignition 2.19.0 Apr 14 00:41:36.920092 ignition[962]: INFO : Stage: files Apr 14 00:41:36.924796 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:36.924796 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:36.924796 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Apr 14 00:41:36.924796 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 14 00:41:36.924796 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 14 00:41:36.945968 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 14 00:41:36.945968 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 14 00:41:36.945968 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 14 00:41:36.945968 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 14 00:41:36.945968 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 14 00:41:36.945968 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 00:41:36.945968 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 14 00:41:36.927022 unknown[962]: wrote ssh authorized keys file for user: core Apr 14 00:41:37.007604 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 14 00:41:37.164468 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 14 00:41:37.164468 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 14 00:41:37.175522 ignition[962]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 00:41:37.175522 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 14 00:41:37.376239 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 14 00:41:37.652918 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 14 00:41:37.652918 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Apr 14 00:41:37.668087 ignition[962]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: op(14): [finished] setting preset to 
enabled for "prepare-helm.service" Apr 14 00:41:37.749556 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 14 00:41:37.749556 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 14 00:41:37.749556 ignition[962]: INFO : files: files passed Apr 14 00:41:37.749556 ignition[962]: INFO : Ignition finished successfully Apr 14 00:41:37.722772 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 14 00:41:37.762409 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 14 00:41:37.776956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 14 00:41:37.790983 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 14 00:41:37.894470 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Apr 14 00:41:37.791207 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 14 00:41:37.905403 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 00:41:37.905403 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 14 00:41:37.918225 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 14 00:41:37.921497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 00:41:37.930856 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 14 00:41:37.957081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 14 00:41:37.998370 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 14 00:41:37.998803 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 14 00:41:38.007116 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 14 00:41:38.011309 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 14 00:41:38.018458 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 14 00:41:38.047956 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 14 00:41:38.069518 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 00:41:38.094487 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 14 00:41:38.112571 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 14 00:41:38.116864 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 00:41:38.117214 systemd[1]: Stopped target timers.target - Timer Units. Apr 14 00:41:38.126920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 14 00:41:38.127107 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 14 00:41:38.143947 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 14 00:41:38.151480 systemd[1]: Stopped target basic.target - Basic System. Apr 14 00:41:38.155299 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 14 00:41:38.160809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Apr 14 00:41:38.167034 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 14 00:41:38.174522 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 14 00:41:38.183220 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 00:41:38.191309 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 14 00:41:38.201949 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 14 00:41:38.202368 systemd[1]: Stopped target swap.target - Swaps. Apr 14 00:41:38.209442 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 14 00:41:38.209778 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 14 00:41:38.222694 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 14 00:41:38.227720 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 00:41:38.237465 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 14 00:41:38.242116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 00:41:38.256278 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 14 00:41:38.256948 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 14 00:41:38.267870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 14 00:41:38.268233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 00:41:38.279364 systemd[1]: Stopped target paths.target - Path Units. Apr 14 00:41:38.282233 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 14 00:41:38.287905 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 00:41:38.294326 systemd[1]: Stopped target slices.target - Slice Units. Apr 14 00:41:38.307490 systemd[1]: Stopped target sockets.target - Socket Units. Apr 14 00:41:38.308243 systemd[1]: iscsid.socket: Deactivated successfully. Apr 14 00:41:38.308372 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 00:41:38.313863 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 14 00:41:38.314091 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 00:41:38.322849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 14 00:41:38.323030 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 14 00:41:38.333951 systemd[1]: ignition-files.service: Deactivated successfully. Apr 14 00:41:38.334412 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 14 00:41:38.359240 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 14 00:41:38.369538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 14 00:41:38.376760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 14 00:41:38.376926 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 14 00:41:38.391489 ignition[1016]: INFO : Ignition 2.19.0 Apr 14 00:41:38.391489 ignition[1016]: INFO : Stage: umount Apr 14 00:41:38.400673 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 00:41:38.400673 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 00:41:38.400673 ignition[1016]: INFO : umount: umount passed Apr 14 00:41:38.400673 ignition[1016]: INFO : Ignition finished successfully Apr 14 00:41:38.397731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 14 00:41:38.397978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 00:41:38.486480 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 14 00:41:38.488864 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 14 00:41:38.489215 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 14 00:41:38.497457 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 14 00:41:38.497692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 14 00:41:38.511132 systemd[1]: Stopped target network.target - Network. Apr 14 00:41:38.511391 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 14 00:41:38.511503 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 14 00:41:38.523071 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 14 00:41:38.523208 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 14 00:41:38.537083 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 14 00:41:38.537265 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 14 00:41:38.544844 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 14 00:41:38.544913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 14 00:41:38.551096 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 14 00:41:38.563838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 14 00:41:38.575551 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 14 00:41:38.575738 systemd-networkd[785]: eth0: DHCPv6 lease lost Apr 14 00:41:38.575870 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 14 00:41:38.581817 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 14 00:41:38.582065 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 14 00:41:38.594254 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 14 00:41:38.594488 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 14 00:41:38.604835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 14 00:41:38.605030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 14 00:41:38.611029 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 14 00:41:38.611097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 14 00:41:38.635355 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 14 00:41:38.642000 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 14 00:41:38.642235 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 00:41:38.648569 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 00:41:38.648826 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 14 00:41:38.659215 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 14 00:41:38.659301 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 14 00:41:38.669955 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 14 00:41:38.670027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 00:41:38.670488 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 00:41:38.704021 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 14 00:41:38.704385 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 14 00:41:38.713986 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 14 00:41:38.714348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 00:41:38.725944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 14 00:41:38.726017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 14 00:41:38.731358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 14 00:41:38.731423 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 00:41:38.736076 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 14 00:41:38.736179 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 14 00:41:38.751512 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 14 00:41:38.751591 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 14 00:41:38.764363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 00:41:38.764443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 00:41:38.815082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 14 00:41:38.818987 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 14 00:41:38.819079 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 00:41:38.829958 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 14 00:41:38.830029 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 00:41:38.838922 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 14 00:41:38.839050 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 00:41:38.850087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 00:41:38.850275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 00:41:38.862763 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 14 00:41:38.863032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 14 00:41:38.881324 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 14 00:41:38.903467 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 14 00:41:38.988216 systemd[1]: Switching root. Apr 14 00:41:39.036265 systemd-journald[194]: Journal stopped Apr 14 00:41:40.564341 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 14 00:41:40.564435 kernel: SELinux: policy capability network_peer_controls=1 Apr 14 00:41:40.564454 kernel: SELinux: policy capability open_perms=1 Apr 14 00:41:40.564467 kernel: SELinux: policy capability extended_socket_class=1 Apr 14 00:41:40.564480 kernel: SELinux: policy capability always_check_network=0 Apr 14 00:41:40.564498 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 14 00:41:40.564512 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 14 00:41:40.564525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 14 00:41:40.564538 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 14 00:41:40.564553 kernel: audit: type=1403 audit(1776127299.341:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 14 00:41:40.564578 systemd[1]: Successfully loaded SELinux policy in 64.986ms. Apr 14 00:41:40.564600 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.419ms. Apr 14 00:41:40.564615 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 14 00:41:40.564685 systemd[1]: Detected virtualization kvm. Apr 14 00:41:40.564697 systemd[1]: Detected architecture x86-64. Apr 14 00:41:40.564705 systemd[1]: Detected first boot. Apr 14 00:41:40.564713 systemd[1]: Initializing machine ID from VM UUID. Apr 14 00:41:40.564721 zram_generator::config[1078]: No configuration found. Apr 14 00:41:40.564736 systemd[1]: Populated /etc with preset unit settings. Apr 14 00:41:40.564744 systemd[1]: Queued start job for default target multi-user.target. Apr 14 00:41:40.564752 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 14 00:41:40.564761 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 14 00:41:40.564768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 14 00:41:40.564776 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 14 00:41:40.564784 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 14 00:41:40.564792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 14 00:41:40.564802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 14 00:41:40.564815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 14 00:41:40.564828 systemd[1]: Created slice user.slice - User and Session Slice. Apr 14 00:41:40.564838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 14 00:41:40.564850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 14 00:41:40.564858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 14 00:41:40.564874 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 14 00:41:40.564888 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 14 00:41:40.564900 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 00:41:40.564913 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Apr 14 00:41:40.564928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 00:41:40.564943 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 14 00:41:40.564956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 00:41:40.564971 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 00:41:40.564985 systemd[1]: Reached target slices.target - Slice Units. Apr 14 00:41:40.564999 systemd[1]: Reached target swap.target - Swaps. Apr 14 00:41:40.565012 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 14 00:41:40.565026 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 14 00:41:40.565042 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 00:41:40.565056 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 14 00:41:40.565069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 00:41:40.565083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 00:41:40.565096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 00:41:40.565109 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 14 00:41:40.565123 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 14 00:41:40.565136 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 14 00:41:40.565192 systemd[1]: Mounting media.mount - External Media Directory... Apr 14 00:41:40.565210 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:40.565224 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 14 00:41:40.565237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 14 00:41:40.565250 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 14 00:41:40.565264 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 14 00:41:40.565277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 00:41:40.565292 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 00:41:40.565305 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 14 00:41:40.565319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 00:41:40.565335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 00:41:40.565349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 00:41:40.565362 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 14 00:41:40.565374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 00:41:40.565382 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 14 00:41:40.565390 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 14 00:41:40.565401 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Apr 14 00:41:40.565409 kernel: fuse: init (API version 7.39) Apr 14 00:41:40.565421 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 00:41:40.565433 kernel: loop: module loaded Apr 14 00:41:40.565443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 00:41:40.565451 kernel: ACPI: bus type drm_connector registered Apr 14 00:41:40.565458 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 14 00:41:40.565467 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 14 00:41:40.565492 systemd-journald[1170]: Collecting audit messages is disabled. Apr 14 00:41:40.565520 systemd-journald[1170]: Journal started Apr 14 00:41:40.565544 systemd-journald[1170]: Runtime Journal (/run/log/journal/0675c2d7c08a4321955fbc25f713962f) is 6.0M, max 48.3M, 42.2M free. Apr 14 00:41:40.572695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 00:41:40.582810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:40.589308 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 00:41:40.590458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 14 00:41:40.593816 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 14 00:41:40.596853 systemd[1]: Mounted media.mount - External Media Directory. Apr 14 00:41:40.600725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 14 00:41:40.604403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 14 00:41:40.608273 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 14 00:41:40.612062 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 14 00:41:40.616438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 00:41:40.620855 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 14 00:41:40.621299 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 14 00:41:40.625586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 00:41:40.625865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 00:41:40.630134 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 00:41:40.630610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 00:41:40.635355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 00:41:40.636112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 00:41:40.641504 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 14 00:41:40.641923 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 14 00:41:40.646583 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 00:41:40.646906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 00:41:40.650856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 00:41:40.654951 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 14 00:41:40.660374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Apr 14 00:41:40.664838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 00:41:40.684139 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 14 00:41:40.704245 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 14 00:41:40.711612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 14 00:41:40.715102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 14 00:41:40.718477 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 14 00:41:40.723831 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 14 00:41:40.727853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 00:41:40.731842 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 14 00:41:40.736526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 00:41:40.737906 systemd-journald[1170]: Time spent on flushing to /var/log/journal/0675c2d7c08a4321955fbc25f713962f is 35.443ms for 981 entries. Apr 14 00:41:40.737906 systemd-journald[1170]: System Journal (/var/log/journal/0675c2d7c08a4321955fbc25f713962f) is 8.0M, max 195.6M, 187.6M free. Apr 14 00:41:40.796544 systemd-journald[1170]: Received client request to flush runtime journal. Apr 14 00:41:40.740838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 00:41:40.747737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 00:41:40.753854 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 14 00:41:40.761393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 14 00:41:40.767345 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 14 00:41:40.772767 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 14 00:41:40.780356 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 14 00:41:40.793974 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 14 00:41:40.803388 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 14 00:41:40.813368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 00:41:40.816018 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 14 00:41:40.816053 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 14 00:41:40.823320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 00:41:40.840071 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 14 00:41:40.895736 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 14 00:41:40.914368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 00:41:40.988531 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Apr 14 00:41:40.988990 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. 
Apr 14 00:41:40.994297 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 00:41:41.618329 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 14 00:41:41.641045 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 00:41:41.686405 systemd-udevd[1242]: Using default interface naming scheme 'v255'. Apr 14 00:41:41.732553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 00:41:41.750959 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 00:41:41.769885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 14 00:41:41.793450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1259) Apr 14 00:41:41.788173 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 14 00:41:41.846106 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 14 00:41:41.855367 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 00:41:41.903189 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 14 00:41:41.983688 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 14 00:41:41.988982 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 14 00:41:41.991077 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 14 00:41:41.991210 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 14 00:41:41.991281 kernel: ACPI: button: Power Button [PWRF] Apr 14 00:41:41.998701 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 14 00:41:41.999284 systemd-networkd[1257]: lo: Link UP Apr 14 00:41:41.999315 systemd-networkd[1257]: lo: Gained carrier Apr 14 00:41:42.006485 systemd-networkd[1257]: Enumeration completed Apr 14 00:41:42.007314 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 00:41:42.007500 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 00:41:42.007506 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 00:41:42.009822 systemd-networkd[1257]: eth0: Link UP Apr 14 00:41:42.009843 systemd-networkd[1257]: eth0: Gained carrier Apr 14 00:41:42.009858 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 00:41:42.027792 systemd-networkd[1257]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 00:41:42.027916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 14 00:41:42.039206 kernel: mousedev: PS/2 mouse device common for all mice Apr 14 00:41:42.048093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 00:41:42.065806 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 00:41:42.066291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 00:41:42.083878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 00:41:42.209111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 14 00:41:42.308484 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 14 00:41:42.330939 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 14 00:41:42.347326 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 00:41:42.385592 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 14 00:41:42.392052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 00:41:42.411061 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 14 00:41:42.425688 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 14 00:41:42.461558 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 14 00:41:42.467013 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 00:41:42.473034 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 14 00:41:42.473104 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 00:41:42.476334 systemd[1]: Reached target machines.target - Containers. Apr 14 00:41:42.481966 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 14 00:41:42.503593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 14 00:41:42.512957 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 14 00:41:42.516614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 00:41:42.519821 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 14 00:41:42.526738 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 14 00:41:42.535852 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 14 00:41:42.546467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 14 00:41:42.560766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 14 00:41:42.575439 kernel: loop0: detected capacity change from 0 to 228704 Apr 14 00:41:42.579008 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 14 00:41:42.582779 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 14 00:41:42.605664 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 14 00:41:42.647267 kernel: loop1: detected capacity change from 0 to 142488 Apr 14 00:41:42.723701 kernel: loop2: detected capacity change from 0 to 140768 Apr 14 00:41:42.830729 kernel: loop3: detected capacity change from 0 to 228704 Apr 14 00:41:42.864726 kernel: loop4: detected capacity change from 0 to 142488 Apr 14 00:41:42.905702 kernel: loop5: detected capacity change from 0 to 140768 Apr 14 00:41:42.925766 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 14 00:41:42.926230 (sd-merge)[1316]: Merged extensions into '/usr'. Apr 14 00:41:42.930881 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... 
Apr 14 00:41:42.930914 systemd[1]: Reloading... Apr 14 00:41:43.003726 zram_generator::config[1344]: No configuration found. Apr 14 00:41:43.094202 systemd-networkd[1257]: eth0: Gained IPv6LL Apr 14 00:41:43.196770 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 14 00:41:43.201259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:41:43.304419 systemd[1]: Reloading finished in 373 ms. Apr 14 00:41:43.375837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 00:41:43.381946 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 00:41:43.389331 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 00:41:43.419834 systemd[1]: Starting ensure-sysext.service... Apr 14 00:41:43.426864 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 00:41:43.436786 systemd[1]: Reloading requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)... Apr 14 00:41:43.436854 systemd[1]: Reloading... Apr 14 00:41:43.476489 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 00:41:43.478584 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 00:41:43.480319 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 00:41:43.480770 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Apr 14 00:41:43.480925 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Apr 14 00:41:43.484467 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 00:41:43.484603 systemd-tmpfiles[1391]: Skipping /boot Apr 14 00:41:43.499077 zram_generator::config[1417]: No configuration found. Apr 14 00:41:43.496588 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 00:41:43.498853 systemd-tmpfiles[1391]: Skipping /boot Apr 14 00:41:43.688868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:41:43.809067 systemd[1]: Reloading finished in 371 ms. Apr 14 00:41:43.849969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 00:41:43.876232 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 00:41:43.910869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 14 00:41:43.917296 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 14 00:41:43.985015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 00:41:43.993927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 14 00:41:44.008088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 14 00:41:44.008323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 00:41:44.009940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 00:41:44.019042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 00:41:44.033059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 00:41:44.037099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 00:41:44.037312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:44.040438 augenrules[1485]: No rules Apr 14 00:41:44.047275 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 14 00:41:44.053016 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 00:41:44.059542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 00:41:44.059797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 00:41:44.063604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 00:41:44.063909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 00:41:44.071135 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 00:41:44.071421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 00:41:44.078151 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 14 00:41:44.099932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:44.100216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 00:41:44.111151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 00:41:44.117319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 00:41:44.123043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 00:41:44.123782 systemd-resolved[1474]: Positive Trust Anchors: Apr 14 00:41:44.123809 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 00:41:44.123850 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 00:41:44.129923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 00:41:44.129940 systemd-resolved[1474]: Defaulting to hostname 'linux'. Apr 14 00:41:44.132821 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 14 00:41:44.138068 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 00:41:44.138306 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:44.141673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 00:41:44.148434 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 14 00:41:44.155500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 00:41:44.155957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 00:41:44.161589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 00:41:44.162013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 00:41:44.167939 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 00:41:44.168419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 00:41:44.173725 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 00:41:44.196335 systemd[1]: Reached target network.target - Network. Apr 14 00:41:44.200891 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 00:41:44.205447 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 00:41:44.210020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:44.210961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 00:41:44.227537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 00:41:44.234037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 00:41:44.239792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 00:41:44.246033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 00:41:44.249489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 00:41:44.249729 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 00:41:44.249827 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 00:41:44.252411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 00:41:44.252926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 00:41:44.270316 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 00:41:44.271319 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 00:41:44.277262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 00:41:44.277612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 00:41:44.282592 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 00:41:44.283117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 14 00:41:44.289882 systemd[1]: Finished ensure-sysext.service. Apr 14 00:41:44.300278 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 00:41:44.300549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 00:41:44.312079 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 00:41:44.392383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 14 00:41:44.397882 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 00:41:45.414265 systemd-resolved[1474]: Clock change detected. Flushing caches. Apr 14 00:41:45.414425 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 00:41:45.414470 systemd-timesyncd[1535]: Initial clock synchronization to Tue 2026-04-14 00:41:45.414026 UTC. Apr 14 00:41:45.418714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 00:41:45.423129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 14 00:41:45.429150 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 14 00:41:45.433697 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 00:41:45.433755 systemd[1]: Reached target paths.target - Path Units. Apr 14 00:41:45.436853 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 00:41:45.493571 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 00:41:45.500898 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 00:41:45.507248 systemd[1]: Reached target timers.target - Timer Units. Apr 14 00:41:45.515583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 00:41:45.523888 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 00:41:45.530414 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 00:41:45.541760 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 00:41:45.545673 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 00:41:45.549224 systemd[1]: Reached target basic.target - Basic System. Apr 14 00:41:45.552845 systemd[1]: System is tainted: cgroupsv1 Apr 14 00:41:45.553223 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 00:41:45.553268 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 00:41:45.556227 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 00:41:45.562759 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 00:41:45.568755 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 14 00:41:45.576801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 14 00:41:45.583734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 14 00:41:45.585914 jq[1543]: false Apr 14 00:41:45.587292 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 00:41:45.590808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:41:45.597403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 00:41:45.606399 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 00:41:45.609881 extend-filesystems[1545]: Found loop3 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found loop4 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found loop5 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found sr0 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda1 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda2 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda3 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found usr Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda4 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda6 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda7 Apr 14 00:41:45.612912 extend-filesystems[1545]: Found vda9 Apr 14 00:41:45.612912 extend-filesystems[1545]: Checking size of /dev/vda9 Apr 14 00:41:45.703168 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 00:41:45.703202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1259) Apr 14 00:41:45.611837 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 14 00:41:45.703332 extend-filesystems[1545]: Resized partition /dev/vda9 Apr 14 00:41:45.667987 dbus-daemon[1541]: [system] SELinux support is enabled Apr 14 00:41:45.622862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 14 00:41:45.722793 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024) Apr 14 00:41:45.640838 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 14 00:41:45.652727 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 00:41:45.676763 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 00:41:45.691025 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 00:41:45.701334 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 00:41:45.705173 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 00:41:45.729880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 14 00:41:45.744781 jq[1574]: true Apr 14 00:41:45.730384 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 00:41:45.733971 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 00:41:45.734295 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 00:41:45.752389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 14 00:41:45.752860 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 14 00:41:45.783994 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 00:41:45.795279 update_engine[1572]: I20260414 00:41:45.783694 1572 main.cc:92] Flatcar Update Engine starting Apr 14 00:41:45.795279 update_engine[1572]: I20260414 00:41:45.790809 1572 update_check_scheduler.cc:74] Next update check in 7m19s Apr 14 00:41:45.801724 jq[1583]: true Apr 14 00:41:45.810802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 00:41:45.820422 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 00:41:45.821018 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 00:41:45.833624 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 00:41:45.874815 tar[1582]: linux-amd64/LICENSE Apr 14 00:41:45.863199 systemd[1]: Started update-engine.service - Update Engine. Apr 14 00:41:45.877815 tar[1582]: linux-amd64/helm Apr 14 00:41:45.875696 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 14 00:41:45.876285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 00:41:45.876331 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 00:41:45.877295 systemd-logind[1566]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 00:41:45.877463 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 00:41:45.879736 systemd-logind[1566]: New seat seat0. Apr 14 00:41:45.882670 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 00:41:45.882670 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 00:41:45.882670 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 00:41:45.898443 extend-filesystems[1545]: Resized filesystem in /dev/vda9 Apr 14 00:41:45.884771 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 00:41:45.908306 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Apr 14 00:41:45.884818 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 00:41:45.899761 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 00:41:45.914614 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 00:41:45.924367 systemd[1]: Started systemd-logind.service - User Login Management. Apr 14 00:41:45.934113 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 00:41:45.934354 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 14 00:41:45.939012 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 00:41:45.950008 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 14 00:41:46.025169 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 00:41:46.079213 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 00:41:46.120649 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 00:41:46.188725 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 00:41:46.211790 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 00:41:46.213355 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 00:41:46.227770 containerd[1585]: time="2026-04-14T00:41:46.227687414Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 00:41:46.233273 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 00:41:46.253947 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 00:41:46.266147 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 00:41:46.276261 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 00:41:46.277824 containerd[1585]: time="2026-04-14T00:41:46.277429495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.282349 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 00:41:46.283898 containerd[1585]: time="2026-04-14T00:41:46.283796809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284036250Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284129888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284308823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284327449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284387659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.284400929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285015017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285190780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285235308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285253908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285360903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286573 containerd[1585]: time="2026-04-14T00:41:46.285810717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286949 containerd[1585]: time="2026-04-14T00:41:46.286016574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 00:41:46.286949 containerd[1585]: time="2026-04-14T00:41:46.286033178Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 00:41:46.286949 containerd[1585]: time="2026-04-14T00:41:46.286336941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 00:41:46.287126 containerd[1585]: time="2026-04-14T00:41:46.286489810Z" level=info msg="metadata content store policy set" policy=shared Apr 14 00:41:46.294960 containerd[1585]: time="2026-04-14T00:41:46.294896991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 00:41:46.295871 containerd[1585]: time="2026-04-14T00:41:46.295627229Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 00:41:46.295987 containerd[1585]: time="2026-04-14T00:41:46.295975504Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 00:41:46.296071 containerd[1585]: time="2026-04-14T00:41:46.296031347Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 00:41:46.296174 containerd[1585]: time="2026-04-14T00:41:46.296133256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 00:41:46.296616 containerd[1585]: time="2026-04-14T00:41:46.296482301Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 00:41:46.297621 containerd[1585]: time="2026-04-14T00:41:46.297571357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 00:41:46.297951 containerd[1585]: time="2026-04-14T00:41:46.297754991Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 00:41:46.297951 containerd[1585]: time="2026-04-14T00:41:46.297936830Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 00:41:46.297951 containerd[1585]: time="2026-04-14T00:41:46.297961314Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.297974660Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.297986310Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.297998880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.298021860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.298109681Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.298171731Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.298183081Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298196 containerd[1585]: time="2026-04-14T00:41:46.298191400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298214848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298227600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298237470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298249521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298258431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298267749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298275871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298286892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298296341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298309961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298318684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298326312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298352 containerd[1585]: time="2026-04-14T00:41:46.298344584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298362068Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298380670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298389462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298398310Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298441045Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298456357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298464977Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298473111Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 00:41:46.298659 containerd[1585]: time="2026-04-14T00:41:46.298479438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 00:41:46.299706 containerd[1585]: time="2026-04-14T00:41:46.298665709Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 00:41:46.299706 containerd[1585]: time="2026-04-14T00:41:46.299698689Z" level=info msg="NRI interface is disabled by configuration." Apr 14 00:41:46.299809 containerd[1585]: time="2026-04-14T00:41:46.299722738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 14 00:41:46.301098 containerd[1585]: time="2026-04-14T00:41:46.300558192Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 00:41:46.301486 containerd[1585]: time="2026-04-14T00:41:46.301161218Z" level=info msg="Connect containerd service" Apr 14 00:41:46.301653 containerd[1585]: time="2026-04-14T00:41:46.301588494Z" level=info msg="using legacy CRI server" Apr 14 00:41:46.301682 containerd[1585]: time="2026-04-14T00:41:46.301668632Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 00:41:46.302414 containerd[1585]: time="2026-04-14T00:41:46.302023382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 00:41:46.303986 containerd[1585]: time="2026-04-14T00:41:46.303789063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 
00:41:46.305388 containerd[1585]: time="2026-04-14T00:41:46.304911856Z" level=info msg="Start subscribing containerd event" Apr 14 00:41:46.305388 containerd[1585]: time="2026-04-14T00:41:46.305095955Z" level=info msg="Start recovering state" Apr 14 00:41:46.306007 containerd[1585]: time="2026-04-14T00:41:46.305856201Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 00:41:46.306111 containerd[1585]: time="2026-04-14T00:41:46.305886032Z" level=info msg="Start event monitor" Apr 14 00:41:46.306141 containerd[1585]: time="2026-04-14T00:41:46.306119541Z" level=info msg="Start snapshots syncer" Apr 14 00:41:46.306141 containerd[1585]: time="2026-04-14T00:41:46.306137044Z" level=info msg="Start cni network conf syncer for default" Apr 14 00:41:46.306189 containerd[1585]: time="2026-04-14T00:41:46.306153260Z" level=info msg="Start streaming server" Apr 14 00:41:46.307214 containerd[1585]: time="2026-04-14T00:41:46.306138678Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 00:41:46.307214 containerd[1585]: time="2026-04-14T00:41:46.306321685Z" level=info msg="containerd successfully booted in 0.081017s" Apr 14 00:41:46.307714 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 00:41:46.645615 tar[1582]: linux-amd64/README.md Apr 14 00:41:46.663814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 00:41:47.097187 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 00:41:47.114256 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). Apr 14 00:41:47.211358 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:47.216201 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:47.301808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:41:47.302391 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:41:47.311622 systemd-logind[1566]: New session 1 of user core. Apr 14 00:41:47.313363 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 00:41:47.318415 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 00:41:47.324829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 00:41:47.359023 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 00:41:47.376157 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 14 00:41:47.392161 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 00:41:47.584360 systemd[1693]: Queued start job for default target default.target. Apr 14 00:41:47.585162 systemd[1693]: Created slice app.slice - User Application Slice. Apr 14 00:41:47.585189 systemd[1693]: Reached target paths.target - Paths. Apr 14 00:41:47.585198 systemd[1693]: Reached target timers.target - Timers. Apr 14 00:41:47.599916 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 00:41:47.611402 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 00:41:47.611480 systemd[1693]: Reached target sockets.target - Sockets. Apr 14 00:41:47.611533 systemd[1693]: Reached target basic.target - Basic System. 
Apr 14 00:41:47.611569 systemd[1693]: Reached target default.target - Main User Target. Apr 14 00:41:47.611589 systemd[1693]: Startup finished in 204ms. Apr 14 00:41:47.611726 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 00:41:47.618862 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 00:41:47.624905 systemd[1]: Startup finished in 7.829s (kernel) + 7.331s (userspace) = 15.161s. Apr 14 00:41:47.689991 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790). Apr 14 00:41:47.743594 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:47.746720 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:47.758269 systemd-logind[1566]: New session 2 of user core. Apr 14 00:41:47.769474 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 00:41:47.901188 sshd[1714]: pam_unix(sshd:session): session closed for user core Apr 14 00:41:47.913277 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:33790.service: Deactivated successfully. Apr 14 00:41:47.916693 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 00:41:47.919470 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Apr 14 00:41:47.930939 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:33800.service - OpenSSH per-connection server daemon (10.0.0.1:33800). Apr 14 00:41:47.932765 systemd-logind[1566]: Removed session 2. Apr 14 00:41:47.975409 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 33800 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:47.977852 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:47.987736 systemd-logind[1566]: New session 3 of user core. Apr 14 00:41:48.002238 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 00:41:48.061296 sshd[1723]: pam_unix(sshd:session): session closed for user core Apr 14 00:41:48.081199 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:33802.service - OpenSSH per-connection server daemon (10.0.0.1:33802). Apr 14 00:41:48.081970 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:33800.service: Deactivated successfully. Apr 14 00:41:48.087255 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 00:41:48.090881 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Apr 14 00:41:48.094671 systemd-logind[1566]: Removed session 3. Apr 14 00:41:48.125890 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 33802 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:48.128477 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:48.137731 systemd-logind[1566]: New session 4 of user core. Apr 14 00:41:48.147651 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 00:41:48.214836 sshd[1728]: pam_unix(sshd:session): session closed for user core Apr 14 00:41:48.219457 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:33802.service: Deactivated successfully. 
Apr 14 00:41:48.224242 kubelet[1687]: E0414 00:41:48.224011 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:41:48.224811 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 00:41:48.226940 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Apr 14 00:41:48.235452 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:33810.service - OpenSSH per-connection server daemon (10.0.0.1:33810). Apr 14 00:41:48.235815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:41:48.235895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:41:48.237630 systemd-logind[1566]: Removed session 4. Apr 14 00:41:48.281968 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 33810 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:48.284366 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:48.292638 systemd-logind[1566]: New session 5 of user core. Apr 14 00:41:48.307262 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 00:41:48.372113 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 00:41:48.372336 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:41:48.391047 sudo[1746]: pam_unix(sudo:session): session closed for user root Apr 14 00:41:48.394094 sshd[1741]: pam_unix(sshd:session): session closed for user core Apr 14 00:41:48.413293 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:33816.service - OpenSSH per-connection server daemon (10.0.0.1:33816). Apr 14 00:41:48.414160 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:33810.service: Deactivated successfully. Apr 14 00:41:48.417868 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Apr 14 00:41:48.419036 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 00:41:48.421087 systemd-logind[1566]: Removed session 5. Apr 14 00:41:48.507119 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 33816 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:48.510849 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:48.518426 systemd-logind[1566]: New session 6 of user core. Apr 14 00:41:48.528085 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 00:41:48.583873 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 00:41:48.584253 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:41:48.589451 sudo[1756]: pam_unix(sudo:session): session closed for user root Apr 14 00:41:48.595638 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 00:41:48.595856 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:41:48.627301 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 00:41:48.629571 auditctl[1759]: No rules Apr 14 00:41:48.630447 systemd[1]: audit-rules.service: Deactivated successfully. 
Apr 14 00:41:48.630865 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 00:41:48.633793 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 00:41:48.705485 augenrules[1778]: No rules Apr 14 00:41:48.707252 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 00:41:48.709392 sudo[1755]: pam_unix(sudo:session): session closed for user root Apr 14 00:41:48.712985 sshd[1748]: pam_unix(sshd:session): session closed for user core Apr 14 00:41:48.722238 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:33824.service - OpenSSH per-connection server daemon (10.0.0.1:33824). Apr 14 00:41:48.722962 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:33816.service: Deactivated successfully. Apr 14 00:41:48.725736 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 00:41:48.727229 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Apr 14 00:41:48.729703 systemd-logind[1566]: Removed session 6. Apr 14 00:41:48.763657 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 33824 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:41:48.766318 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:41:48.776923 systemd-logind[1566]: New session 7 of user core. Apr 14 00:41:48.790190 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 00:41:48.856865 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 00:41:48.857247 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 00:41:49.391315 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 00:41:49.391829 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 00:41:49.909689 dockerd[1810]: time="2026-04-14T00:41:49.909178144Z" level=info msg="Starting up" Apr 14 00:41:50.217616 dockerd[1810]: time="2026-04-14T00:41:50.217154941Z" level=info msg="Loading containers: start." Apr 14 00:41:50.427693 kernel: Initializing XFRM netlink socket Apr 14 00:41:50.558327 systemd-networkd[1257]: docker0: Link UP Apr 14 00:41:50.596410 dockerd[1810]: time="2026-04-14T00:41:50.596291093Z" level=info msg="Loading containers: done." Apr 14 00:41:50.624040 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2302382190-merged.mount: Deactivated successfully. Apr 14 00:41:50.626778 dockerd[1810]: time="2026-04-14T00:41:50.626661060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 00:41:50.626998 dockerd[1810]: time="2026-04-14T00:41:50.626966938Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 00:41:50.627179 dockerd[1810]: time="2026-04-14T00:41:50.627147697Z" level=info msg="Daemon has completed initialization" Apr 14 00:41:50.686338 dockerd[1810]: time="2026-04-14T00:41:50.686247706Z" level=info msg="API listen on /run/docker.sock" Apr 14 00:41:50.686490 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 14 00:41:51.632946 containerd[1585]: time="2026-04-14T00:41:51.632900932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 00:41:52.362127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935810368.mount: Deactivated successfully. Apr 14 00:41:54.128185 containerd[1585]: time="2026-04-14T00:41:54.127923314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:54.131537 containerd[1585]: time="2026-04-14T00:41:54.131384717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 00:41:54.133967 containerd[1585]: time="2026-04-14T00:41:54.133865227Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:54.142282 containerd[1585]: time="2026-04-14T00:41:54.142141725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:54.145676 containerd[1585]: time="2026-04-14T00:41:54.145230624Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.512276724s" Apr 14 00:41:54.145676 containerd[1585]: time="2026-04-14T00:41:54.145321182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 00:41:54.147383 containerd[1585]: time="2026-04-14T00:41:54.147281441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 00:41:55.656099 containerd[1585]: time="2026-04-14T00:41:55.655760757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:55.656980 containerd[1585]: time="2026-04-14T00:41:55.656773506Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 00:41:55.660737 containerd[1585]: time="2026-04-14T00:41:55.660616844Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:55.670754 containerd[1585]: time="2026-04-14T00:41:55.670270044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:55.674581 containerd[1585]: time="2026-04-14T00:41:55.674377923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.527034819s" 
Apr 14 00:41:55.674581 containerd[1585]: time="2026-04-14T00:41:55.674546809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 00:41:55.676637 containerd[1585]: time="2026-04-14T00:41:55.676363369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 00:41:56.723574 containerd[1585]: time="2026-04-14T00:41:56.723426467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:56.728201 containerd[1585]: time="2026-04-14T00:41:56.727872353Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 00:41:56.732187 containerd[1585]: time="2026-04-14T00:41:56.731974941Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:56.745805 containerd[1585]: time="2026-04-14T00:41:56.745627034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:56.748226 containerd[1585]: time="2026-04-14T00:41:56.747914857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.071275939s" Apr 14 00:41:56.748226 containerd[1585]: time="2026-04-14T00:41:56.747999258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 00:41:56.748861 containerd[1585]: time="2026-04-14T00:41:56.748770747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 00:41:57.946191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653182591.mount: Deactivated successfully. Apr 14 00:41:58.431982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 00:41:58.441963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:41:58.621416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 00:41:58.641282 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 00:41:58.661776 containerd[1585]: time="2026-04-14T00:41:58.660286207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:58.665929 containerd[1585]: time="2026-04-14T00:41:58.665798774Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 00:41:58.667460 containerd[1585]: time="2026-04-14T00:41:58.667389838Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:58.683429 containerd[1585]: time="2026-04-14T00:41:58.682813469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:41:58.686704 containerd[1585]: time="2026-04-14T00:41:58.686413305Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.937378928s" Apr 14 00:41:58.686981 containerd[1585]: time="2026-04-14T00:41:58.686802860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 00:41:58.689457 containerd[1585]: time="2026-04-14T00:41:58.689360837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 00:41:58.816623 kubelet[2044]: E0414 00:41:58.815697 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 00:41:58.822952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 00:41:58.823374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 00:41:59.337432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784648588.mount: Deactivated successfully. 
Apr 14 00:42:00.820877 containerd[1585]: time="2026-04-14T00:42:00.820677305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:00.822219 containerd[1585]: time="2026-04-14T00:42:00.821898388Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 00:42:00.823378 containerd[1585]: time="2026-04-14T00:42:00.823268011Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:00.828592 containerd[1585]: time="2026-04-14T00:42:00.828482231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:00.830266 containerd[1585]: time="2026-04-14T00:42:00.829773587Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.140326942s" Apr 14 00:42:00.830266 containerd[1585]: time="2026-04-14T00:42:00.829849325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 00:42:00.831792 containerd[1585]: time="2026-04-14T00:42:00.831728814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 00:42:01.293663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384712853.mount: Deactivated successfully. 
Apr 14 00:42:01.307723 containerd[1585]: time="2026-04-14T00:42:01.306169753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:01.312575 containerd[1585]: time="2026-04-14T00:42:01.312241599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 00:42:01.324330 containerd[1585]: time="2026-04-14T00:42:01.321738906Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:01.334224 containerd[1585]: time="2026-04-14T00:42:01.333963999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:01.337736 containerd[1585]: time="2026-04-14T00:42:01.337557674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.752206ms" Apr 14 00:42:01.337736 containerd[1585]: time="2026-04-14T00:42:01.337624557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 00:42:01.338943 containerd[1585]: time="2026-04-14T00:42:01.338838112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 00:42:02.008401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068806617.mount: Deactivated successfully. Apr 14 00:42:03.802937 containerd[1585]: time="2026-04-14T00:42:03.802488677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:03.804064 containerd[1585]: time="2026-04-14T00:42:03.803884214Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 00:42:03.806361 containerd[1585]: time="2026-04-14T00:42:03.806232671Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:03.816014 containerd[1585]: time="2026-04-14T00:42:03.815750233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:03.819546 containerd[1585]: time="2026-04-14T00:42:03.819239119Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.480322377s" Apr 14 00:42:03.819731 containerd[1585]: time="2026-04-14T00:42:03.819453400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 00:42:08.841096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 14 00:42:08.858371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:42:08.877649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 00:42:08.877905 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 14 00:42:08.878402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:42:08.883438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:42:08.930285 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-7.scope)... Apr 14 00:42:08.930334 systemd[1]: Reloading... Apr 14 00:42:09.077546 zram_generator::config[2250]: No configuration found. Apr 14 00:42:09.203685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:42:09.280373 systemd[1]: Reloading finished in 349 ms. Apr 14 00:42:09.327935 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 14 00:42:09.328169 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 14 00:42:09.328421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:42:09.331698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:42:09.476018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:42:09.481201 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:42:09.538547 kubelet[2310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:42:09.538547 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:42:09.538547 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 00:42:09.538547 kubelet[2310]: I0414 00:42:09.537724 2310 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:42:09.666110 kubelet[2310]: I0414 00:42:09.665949 2310 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 00:42:09.666110 kubelet[2310]: I0414 00:42:09.666105 2310 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:42:09.667038 kubelet[2310]: I0414 00:42:09.666956 2310 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:42:09.687623 kubelet[2310]: E0414 00:42:09.687488 2310 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 00:42:09.692338 kubelet[2310]: I0414 00:42:09.692171 2310 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:42:09.704590 kubelet[2310]: E0414 00:42:09.704524 2310 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:42:09.704590 kubelet[2310]: I0414 00:42:09.704565 2310 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 00:42:09.710443 kubelet[2310]: I0414 00:42:09.710346 2310 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 00:42:09.711892 kubelet[2310]: I0414 00:42:09.711632 2310 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:42:09.712291 kubelet[2310]: I0414 00:42:09.711883 2310 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 00:42:09.712641 kubelet[2310]: I0414 00:42:09.712370 2310 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 00:42:09.712641 kubelet[2310]: I0414 00:42:09.712392 2310 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 00:42:09.712732 kubelet[2310]: I0414 00:42:09.712698 2310 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:42:09.716911 kubelet[2310]: I0414 00:42:09.716848 2310 kubelet.go:480] "Attempting to sync node with API server" Apr 14 00:42:09.716911 kubelet[2310]: I0414 00:42:09.716890 2310 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:42:09.716911 kubelet[2310]: I0414 00:42:09.716918 2310 kubelet.go:386] "Adding apiserver pod source" Apr 14 00:42:09.717073 kubelet[2310]: I0414 00:42:09.716940 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:42:09.720427 kubelet[2310]: I0414 00:42:09.719972 2310 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:42:09.720617 kubelet[2310]: I0414 00:42:09.720591 2310 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:42:09.721593 kubelet[2310]: W0414 00:42:09.721388 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 14 00:42:09.723000 kubelet[2310]: E0414 00:42:09.722952 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:42:09.725592 kubelet[2310]: E0414 00:42:09.723248 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:42:09.728482 kubelet[2310]: I0414 00:42:09.728378 2310 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 00:42:09.728482 kubelet[2310]: I0414 00:42:09.728447 2310 server.go:1289] "Started kubelet" Apr 14 00:42:09.728707 kubelet[2310]: I0414 00:42:09.728673 2310 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:42:09.731583 kubelet[2310]: I0414 00:42:09.731409 2310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:42:09.731944 kubelet[2310]: I0414 00:42:09.731898 2310 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 00:42:09.732324 kubelet[2310]: I0414 00:42:09.732287 2310 server.go:317] "Adding debug handlers to kubelet server" Apr 14 00:42:09.734829 kubelet[2310]: E0414 00:42:09.732664 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6126564532638 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 00:42:09.728407096 +0000 UTC m=+0.242710873,LastTimestamp:2026-04-14 00:42:09.728407096 +0000 UTC m=+0.242710873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 00:42:09.735592 kubelet[2310]: I0414 00:42:09.735201 2310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 00:42:09.735946 kubelet[2310]: I0414 00:42:09.735284 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 00:42:09.737256 kubelet[2310]: I0414 00:42:09.737183 2310 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 00:42:09.738390 kubelet[2310]: E0414 00:42:09.738330 2310 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 00:42:09.739415 kubelet[2310]: I0414 00:42:09.738307 2310 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 00:42:09.739415 kubelet[2310]: E0414 00:42:09.739136 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Apr 14 00:42:09.739415 kubelet[2310]: I0414 00:42:09.739229 2310 reconciler.go:26] "Reconciler: start to sync state" Apr 14 00:42:09.739922 kubelet[2310]: E0414 00:42:09.739874 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 00:42:09.740223 kubelet[2310]: I0414 00:42:09.740115 2310 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 00:42:09.742431 kubelet[2310]: E0414 00:42:09.741076 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:09.742431 kubelet[2310]: I0414 00:42:09.741325 2310 factory.go:223] Registration of the containerd container factory successfully Apr 14 00:42:09.742431 kubelet[2310]: I0414 00:42:09.741334 2310 factory.go:223] Registration of the systemd container factory successfully Apr 14 00:42:09.766407 kubelet[2310]: I0414 00:42:09.766373 2310 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 00:42:09.766407 kubelet[2310]: I0414 00:42:09.766401 2310 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 00:42:09.766566 kubelet[2310]: I0414 00:42:09.766421 2310 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:42:09.770438 kubelet[2310]: I0414 00:42:09.770371 2310 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 00:42:09.772724 kubelet[2310]: I0414 00:42:09.772459 2310 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 00:42:09.772724 kubelet[2310]: I0414 00:42:09.772486 2310 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 00:42:09.772724 kubelet[2310]: I0414 00:42:09.772573 2310 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
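
At this point every request to https://10.0.0.6:6443 (lease, node registration, reflectors, events) fails with connection refused, which is expected: the kube-apiserver static pod has not been created yet, so the kubelet simply keeps retrying. A minimal, hedged way to confirm that state from the node, assuming crictl is installed and containerd is on its default socket (neither of which is shown in this log), would be:

# Expect "connection refused" until the kube-apiserver static pod below is running
curl -k https://10.0.0.6:6443/healthz
# What has containerd actually started so far? (socket path is an assumption)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
# Follow the kubelet retry loop live
journalctl -u kubelet.service -f
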
Apr 14 00:42:09.772724 kubelet[2310]: I0414 00:42:09.772582 2310 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 00:42:09.772724 kubelet[2310]: E0414 00:42:09.772619 2310 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 00:42:09.820461 kubelet[2310]: I0414 00:42:09.820058 2310 policy_none.go:49] "None policy: Start" Apr 14 00:42:09.821666 kubelet[2310]: I0414 00:42:09.820605 2310 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 00:42:09.821666 kubelet[2310]: I0414 00:42:09.820659 2310 state_mem.go:35] "Initializing new in-memory state store" Apr 14 00:42:09.822112 kubelet[2310]: E0414 00:42:09.822004 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 00:42:09.828637 kubelet[2310]: E0414 00:42:09.828543 2310 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 00:42:09.829134 kubelet[2310]: I0414 00:42:09.829074 2310 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 00:42:09.829134 kubelet[2310]: I0414 00:42:09.829115 2310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 00:42:09.830589 kubelet[2310]: I0414 00:42:09.830405 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 00:42:09.832421 kubelet[2310]: E0414 00:42:09.832364 2310 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 00:42:09.832421 kubelet[2310]: E0414 00:42:09.832410 2310 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 00:42:09.906857 kubelet[2310]: E0414 00:42:09.906685 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:09.913578 kubelet[2310]: E0414 00:42:09.912681 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:09.916987 kubelet[2310]: E0414 00:42:09.916909 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:09.933678 kubelet[2310]: I0414 00:42:09.933591 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:42:09.933975 kubelet[2310]: E0414 00:42:09.933926 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 14 00:42:09.939648 kubelet[2310]: I0414 00:42:09.939564 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:09.939648 kubelet[2310]: I0414 00:42:09.939609 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:09.939648 kubelet[2310]: E0414 00:42:09.939638 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Apr 14 00:42:10.040966 kubelet[2310]: I0414 00:42:10.040787 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:10.040966 kubelet[2310]: I0414 00:42:10.040846 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:42:10.040966 kubelet[2310]: I0414 00:42:10.040920 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " 
pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:10.040966 kubelet[2310]: I0414 00:42:10.040948 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:10.041183 kubelet[2310]: I0414 00:42:10.040964 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:10.041183 kubelet[2310]: I0414 00:42:10.041006 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:10.041183 kubelet[2310]: I0414 00:42:10.041021 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:10.137544 kubelet[2310]: I0414 00:42:10.137413 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:42:10.138159 kubelet[2310]: E0414 00:42:10.138030 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 14 00:42:10.209858 kubelet[2310]: E0414 00:42:10.209617 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.211930 containerd[1585]: time="2026-04-14T00:42:10.211854458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 00:42:10.214348 kubelet[2310]: E0414 00:42:10.214259 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.215335 containerd[1585]: time="2026-04-14T00:42:10.215272148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 00:42:10.217967 kubelet[2310]: E0414 00:42:10.217899 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.218598 containerd[1585]: time="2026-04-14T00:42:10.218449140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7f430a71c3c62aabf49307a3e42eed8,Namespace:kube-system,Attempt:0,}" Apr 14 00:42:10.341414 kubelet[2310]: E0414 00:42:10.341272 2310 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Apr 14 00:42:10.541845 kubelet[2310]: I0414 00:42:10.541765 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:42:10.542296 kubelet[2310]: E0414 00:42:10.542212 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Apr 14 00:42:10.644373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972202655.mount: Deactivated successfully. Apr 14 00:42:10.652826 containerd[1585]: time="2026-04-14T00:42:10.652754604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:42:10.653749 containerd[1585]: time="2026-04-14T00:42:10.653688906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 00:42:10.656483 containerd[1585]: time="2026-04-14T00:42:10.656386963Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:42:10.657371 containerd[1585]: time="2026-04-14T00:42:10.657329621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:42:10.658527 containerd[1585]: time="2026-04-14T00:42:10.658384666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:42:10.659432 containerd[1585]: time="2026-04-14T00:42:10.659383518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 00:42:10.660253 containerd[1585]: time="2026-04-14T00:42:10.660119979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 00:42:10.661218 containerd[1585]: time="2026-04-14T00:42:10.661050792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 00:42:10.662987 containerd[1585]: time="2026-04-14T00:42:10.662912497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.971144ms" Apr 14 00:42:10.664219 containerd[1585]: time="2026-04-14T00:42:10.664121178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 448.759049ms" Apr 14 00:42:10.666313 containerd[1585]: time="2026-04-14T00:42:10.666256774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.566319ms" Apr 14 00:42:10.781813 containerd[1585]: time="2026-04-14T00:42:10.781647887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:10.781813 containerd[1585]: time="2026-04-14T00:42:10.781721870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:10.781813 containerd[1585]: time="2026-04-14T00:42:10.781734712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.783367 containerd[1585]: time="2026-04-14T00:42:10.782191094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.783367 containerd[1585]: time="2026-04-14T00:42:10.782416217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:10.783367 containerd[1585]: time="2026-04-14T00:42:10.782575042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:10.783367 containerd[1585]: time="2026-04-14T00:42:10.782595141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.783367 containerd[1585]: time="2026-04-14T00:42:10.782897398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.787287 containerd[1585]: time="2026-04-14T00:42:10.786909638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:10.787287 containerd[1585]: time="2026-04-14T00:42:10.786947796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:10.787287 containerd[1585]: time="2026-04-14T00:42:10.786959466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.787443 containerd[1585]: time="2026-04-14T00:42:10.787312120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:10.858710 containerd[1585]: time="2026-04-14T00:42:10.858644334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7f430a71c3c62aabf49307a3e42eed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf46075e24efccb17db0fc99c2da0c0459074e1afd63c067edbe4c025b09e30\"" Apr 14 00:42:10.861605 kubelet[2310]: E0414 00:42:10.861372 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.868119 containerd[1585]: time="2026-04-14T00:42:10.867969620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7f62a3add1878390ff17204de423349d8ff9a6df5f39052b5791326d32e5354\"" Apr 14 00:42:10.869099 kubelet[2310]: E0414 00:42:10.869012 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.869235 containerd[1585]: time="2026-04-14T00:42:10.868944263Z" level=info msg="CreateContainer within sandbox \"acf46075e24efccb17db0fc99c2da0c0459074e1afd63c067edbe4c025b09e30\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 00:42:10.869436 containerd[1585]: time="2026-04-14T00:42:10.869356519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fa190bc3a44487ca56d2c5a4fcf9c7abac88e3ed12b2e84f1f026b99deccb79\"" Apr 14 00:42:10.870587 kubelet[2310]: E0414 00:42:10.870463 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:10.873802 containerd[1585]: time="2026-04-14T00:42:10.873781298Z" level=info msg="CreateContainer within sandbox \"c7f62a3add1878390ff17204de423349d8ff9a6df5f39052b5791326d32e5354\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 00:42:10.876048 containerd[1585]: time="2026-04-14T00:42:10.876005562Z" level=info msg="CreateContainer within sandbox \"7fa190bc3a44487ca56d2c5a4fcf9c7abac88e3ed12b2e84f1f026b99deccb79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 00:42:10.895810 containerd[1585]: time="2026-04-14T00:42:10.895633050Z" level=info msg="CreateContainer within sandbox \"acf46075e24efccb17db0fc99c2da0c0459074e1afd63c067edbe4c025b09e30\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"796f21f06fa83359d11da6bc33f1c282775cce603a8173cb9298278d517e553b\"" Apr 14 00:42:10.896855 containerd[1585]: time="2026-04-14T00:42:10.896791312Z" level=info msg="StartContainer for \"796f21f06fa83359d11da6bc33f1c282775cce603a8173cb9298278d517e553b\"" Apr 14 00:42:10.900395 containerd[1585]: time="2026-04-14T00:42:10.900317135Z" level=info msg="CreateContainer within sandbox \"7fa190bc3a44487ca56d2c5a4fcf9c7abac88e3ed12b2e84f1f026b99deccb79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96136a37abe76d732b3662d5d6de81cd7542c0ba5d21e51e7ecb2eb7d8e8c9f8\"" Apr 14 00:42:10.900940 containerd[1585]: time="2026-04-14T00:42:10.900882067Z" level=info msg="StartContainer for 
\"96136a37abe76d732b3662d5d6de81cd7542c0ba5d21e51e7ecb2eb7d8e8c9f8\"" Apr 14 00:42:10.901202 kubelet[2310]: E0414 00:42:10.901108 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 00:42:10.902425 containerd[1585]: time="2026-04-14T00:42:10.902323783Z" level=info msg="CreateContainer within sandbox \"c7f62a3add1878390ff17204de423349d8ff9a6df5f39052b5791326d32e5354\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5b1d3f1a70ae2639ecd2a701fc8d40c1e27cc50470474acd9c4aa9e08f27983\"" Apr 14 00:42:10.903654 containerd[1585]: time="2026-04-14T00:42:10.903560741Z" level=info msg="StartContainer for \"c5b1d3f1a70ae2639ecd2a701fc8d40c1e27cc50470474acd9c4aa9e08f27983\"" Apr 14 00:42:10.932403 kubelet[2310]: E0414 00:42:10.932263 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 00:42:11.008646 containerd[1585]: time="2026-04-14T00:42:11.005849054Z" level=info msg="StartContainer for \"796f21f06fa83359d11da6bc33f1c282775cce603a8173cb9298278d517e553b\" returns successfully" Apr 14 00:42:11.013401 containerd[1585]: time="2026-04-14T00:42:11.012762373Z" level=info msg="StartContainer for \"c5b1d3f1a70ae2639ecd2a701fc8d40c1e27cc50470474acd9c4aa9e08f27983\" returns successfully" Apr 14 00:42:11.037570 containerd[1585]: time="2026-04-14T00:42:11.036644555Z" level=info msg="StartContainer for \"96136a37abe76d732b3662d5d6de81cd7542c0ba5d21e51e7ecb2eb7d8e8c9f8\" returns successfully" Apr 14 00:42:11.347539 kubelet[2310]: I0414 00:42:11.346902 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:42:11.793319 kubelet[2310]: E0414 00:42:11.793136 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:11.798274 kubelet[2310]: E0414 00:42:11.795375 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:11.807904 kubelet[2310]: E0414 00:42:11.807740 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:11.808412 kubelet[2310]: E0414 00:42:11.808140 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:11.831895 kubelet[2310]: E0414 00:42:11.831830 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:11.832117 kubelet[2310]: E0414 00:42:11.832078 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:12.831085 kubelet[2310]: E0414 
00:42:12.831012 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:12.832750 kubelet[2310]: E0414 00:42:12.831248 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:12.836130 kubelet[2310]: E0414 00:42:12.835982 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:12.836341 kubelet[2310]: E0414 00:42:12.836321 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:12.881129 kubelet[2310]: E0414 00:42:12.880861 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:12.881372 kubelet[2310]: E0414 00:42:12.881230 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:13.075565 kubelet[2310]: E0414 00:42:13.072758 2310 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 14 00:42:13.260691 kubelet[2310]: I0414 00:42:13.258623 2310 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:42:13.260691 kubelet[2310]: E0414 00:42:13.258676 2310 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 14 00:42:13.280003 kubelet[2310]: E0414 00:42:13.279919 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.395128 kubelet[2310]: E0414 00:42:13.395029 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.496352 kubelet[2310]: E0414 00:42:13.496062 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.597698 kubelet[2310]: E0414 00:42:13.596844 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.698417 kubelet[2310]: E0414 00:42:13.698084 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.799093 kubelet[2310]: E0414 00:42:13.798874 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:13.835094 kubelet[2310]: E0414 00:42:13.835034 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 00:42:13.835970 kubelet[2310]: E0414 00:42:13.835730 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:13.907694 kubelet[2310]: E0414 00:42:13.901231 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:14.001805 
kubelet[2310]: E0414 00:42:14.001662 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 00:42:14.039862 kubelet[2310]: I0414 00:42:14.038890 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:14.060243 kubelet[2310]: I0414 00:42:14.059874 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:14.070834 kubelet[2310]: I0414 00:42:14.070083 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:42:14.725351 kubelet[2310]: I0414 00:42:14.725243 2310 apiserver.go:52] "Watching apiserver" Apr 14 00:42:14.740417 kubelet[2310]: I0414 00:42:14.740346 2310 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 00:42:14.744583 kubelet[2310]: E0414 00:42:14.743845 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:14.748610 kubelet[2310]: E0414 00:42:14.747790 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:14.838916 kubelet[2310]: E0414 00:42:14.838754 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:16.474123 systemd[1]: Reloading requested from client PID 2601 ('systemctl') (unit session-7.scope)... Apr 14 00:42:16.474233 systemd[1]: Reloading... Apr 14 00:42:16.619009 zram_generator::config[2640]: No configuration found. Apr 14 00:42:16.922920 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 00:42:17.035158 systemd[1]: Reloading finished in 560 ms. Apr 14 00:42:17.091562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:42:17.120905 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 00:42:17.121385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:42:17.199735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 00:42:17.448437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 00:42:17.463745 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 00:42:17.630993 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 00:42:17.630993 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 00:42:17.630993 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
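
The restarted kubelet (PID 2695) repeats the same three deprecation warnings as the first instance: --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move off the command line and into the file passed to --config. A hedged sketch of that migration follows; the endpoint value is an assumption (the actual flag values are not printed here), while the volume plugin directory matches the Flexvolume path the kubelet recreated earlier in this log:

# Sketch: append config-file equivalents of the deprecated flags (values partly assumed).
cat <<'EOF' >> /etc/kubernetes/kubelet-config.yaml
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint (assumed default socket)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir (path from the probe.go message above)
EOF
# For --pod-infra-container-image, the warning above notes the sandbox image should also be
# configured in the remote runtime (containerd); setting the kubelet flag alone is not enough.
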
Apr 14 00:42:17.630993 kubelet[2695]: I0414 00:42:17.630898 2695 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 00:42:17.651385 kubelet[2695]: I0414 00:42:17.651234 2695 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 00:42:17.655120 kubelet[2695]: I0414 00:42:17.651485 2695 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 00:42:17.655120 kubelet[2695]: I0414 00:42:17.651796 2695 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 00:42:17.655120 kubelet[2695]: I0414 00:42:17.653945 2695 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 00:42:17.659364 kubelet[2695]: I0414 00:42:17.659307 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 00:42:17.672941 kubelet[2695]: E0414 00:42:17.672807 2695 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 00:42:17.672941 kubelet[2695]: I0414 00:42:17.672916 2695 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 00:42:17.683930 kubelet[2695]: I0414 00:42:17.683874 2695 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 00:42:17.685301 kubelet[2695]: I0414 00:42:17.684910 2695 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 00:42:17.685301 kubelet[2695]: I0414 00:42:17.684967 2695 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 00:42:17.685301 kubelet[2695]: I0414 00:42:17.685127 2695 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 14 00:42:17.685301 kubelet[2695]: I0414 00:42:17.685140 2695 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 00:42:17.685929 kubelet[2695]: I0414 00:42:17.685354 2695 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:42:17.685960 kubelet[2695]: I0414 00:42:17.685940 2695 kubelet.go:480] "Attempting to sync node with API server" Apr 14 00:42:17.685960 kubelet[2695]: I0414 00:42:17.685956 2695 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 00:42:17.688807 kubelet[2695]: I0414 00:42:17.685983 2695 kubelet.go:386] "Adding apiserver pod source" Apr 14 00:42:17.688807 kubelet[2695]: I0414 00:42:17.688661 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 00:42:17.693823 kubelet[2695]: I0414 00:42:17.693795 2695 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 00:42:17.695000 kubelet[2695]: I0414 00:42:17.694886 2695 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 00:42:17.701455 kubelet[2695]: I0414 00:42:17.701303 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 00:42:17.701638 kubelet[2695]: I0414 00:42:17.701566 2695 server.go:1289] "Started kubelet" Apr 14 00:42:17.711142 kubelet[2695]: I0414 00:42:17.710717 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 00:42:17.718768 kubelet[2695]: I0414 00:42:17.717865 2695 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 00:42:17.726796 kubelet[2695]: I0414 00:42:17.726148 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 00:42:17.731131 kubelet[2695]: I0414 00:42:17.730926 2695 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 00:42:17.743222 kubelet[2695]: I0414 00:42:17.743017 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 00:42:17.747773 kubelet[2695]: I0414 00:42:17.747018 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 00:42:17.748788 kubelet[2695]: I0414 00:42:17.747913 2695 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 00:42:17.749019 kubelet[2695]: I0414 00:42:17.748878 2695 reconciler.go:26] "Reconciler: start to sync state" Apr 14 00:42:17.755239 kubelet[2695]: I0414 00:42:17.754656 2695 server.go:317] "Adding debug handlers to kubelet server" Apr 14 00:42:17.770657 kubelet[2695]: E0414 00:42:17.770300 2695 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 00:42:17.780577 kubelet[2695]: I0414 00:42:17.780409 2695 factory.go:223] Registration of the containerd container factory successfully Apr 14 00:42:17.780577 kubelet[2695]: I0414 00:42:17.780447 2695 factory.go:223] Registration of the systemd container factory successfully Apr 14 00:42:17.780577 kubelet[2695]: I0414 00:42:17.780570 2695 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 00:42:17.796042 kubelet[2695]: I0414 00:42:17.795894 2695 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 00:42:17.809832 kubelet[2695]: I0414 00:42:17.809756 2695 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 00:42:17.809832 kubelet[2695]: I0414 00:42:17.809793 2695 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 00:42:17.809832 kubelet[2695]: I0414 00:42:17.809822 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 00:42:17.809832 kubelet[2695]: I0414 00:42:17.809829 2695 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 00:42:17.810301 kubelet[2695]: E0414 00:42:17.809935 2695 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 00:42:17.914654 kubelet[2695]: E0414 00:42:17.910619 2695 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 00:42:18.011273 kubelet[2695]: I0414 00:42:18.011079 2695 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 00:42:18.011955 kubelet[2695]: I0414 00:42:18.011427 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 00:42:18.011955 kubelet[2695]: I0414 00:42:18.011452 2695 state_mem.go:36] "Initialized new in-memory state store" Apr 14 00:42:18.012166 kubelet[2695]: I0414 00:42:18.012145 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 00:42:18.012317 kubelet[2695]: I0414 00:42:18.012249 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 00:42:18.012390 kubelet[2695]: I0414 00:42:18.012386 2695 policy_none.go:49] "None policy: Start" Apr 14 00:42:18.012436 kubelet[2695]: I0414 00:42:18.012432 2695 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 00:42:18.012609 kubelet[2695]: I0414 00:42:18.012573 2695 state_mem.go:35] "Initializing new in-memory state store" Apr 14 00:42:18.012809 kubelet[2695]: I0414 00:42:18.012803 2695 state_mem.go:75] "Updated machine memory state" Apr 14 00:42:18.014852 kubelet[2695]: E0414 00:42:18.014807 2695 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 00:42:18.015031 kubelet[2695]: I0414 00:42:18.014988 2695 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 00:42:18.015051 kubelet[2695]: I0414 00:42:18.015023 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 00:42:18.017169 kubelet[2695]: I0414 00:42:18.015651 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 00:42:18.021419 kubelet[2695]: E0414 00:42:18.019135 2695 eviction_manager.go:267] "eviction manager: 
failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 00:42:18.114150 kubelet[2695]: I0414 00:42:18.114055 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 00:42:18.114351 kubelet[2695]: I0414 00:42:18.114233 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.115407 kubelet[2695]: I0414 00:42:18.115012 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:18.142014 kubelet[2695]: E0414 00:42:18.139302 2695 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:18.143624 kubelet[2695]: I0414 00:42:18.140694 2695 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 00:42:18.144100 kubelet[2695]: E0414 00:42:18.143057 2695 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.144888 kubelet[2695]: E0414 00:42:18.143097 2695 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 14 00:42:18.165023 kubelet[2695]: I0414 00:42:18.164906 2695 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 00:42:18.165416 kubelet[2695]: I0414 00:42:18.165087 2695 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 00:42:18.200476 kubelet[2695]: I0414 00:42:18.199863 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 00:42:18.200476 kubelet[2695]: I0414 00:42:18.200028 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:18.200476 kubelet[2695]: I0414 00:42:18.200063 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.200476 kubelet[2695]: I0414 00:42:18.200083 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:18.200476 kubelet[2695]: I0414 00:42:18.200106 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7f430a71c3c62aabf49307a3e42eed8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" 
(UID: \"d7f430a71c3c62aabf49307a3e42eed8\") " pod="kube-system/kube-apiserver-localhost" Apr 14 00:42:18.200951 kubelet[2695]: I0414 00:42:18.200129 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.200951 kubelet[2695]: I0414 00:42:18.200147 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.200951 kubelet[2695]: I0414 00:42:18.200165 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.200951 kubelet[2695]: I0414 00:42:18.200747 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 00:42:18.483059 kubelet[2695]: E0414 00:42:18.482956 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.484435 kubelet[2695]: E0414 00:42:18.484058 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.485280 kubelet[2695]: E0414 00:42:18.484863 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.690466 kubelet[2695]: I0414 00:42:18.690293 2695 apiserver.go:52] "Watching apiserver" Apr 14 00:42:18.749436 kubelet[2695]: I0414 00:42:18.749240 2695 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 00:42:18.865468 kubelet[2695]: E0414 00:42:18.863298 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.865468 kubelet[2695]: E0414 00:42:18.863425 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.865468 kubelet[2695]: E0414 00:42:18.863429 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:18.940026 kubelet[2695]: I0414 00:42:18.939849 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.93982961 podStartE2EDuration="4.93982961s" podCreationTimestamp="2026-04-14 00:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:42:18.939062222 +0000 UTC m=+1.466064543" watchObservedRunningTime="2026-04-14 00:42:18.93982961 +0000 UTC m=+1.466831939" Apr 14 00:42:19.031452 kubelet[2695]: I0414 00:42:19.029018 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.028988464 podStartE2EDuration="5.028988464s" podCreationTimestamp="2026-04-14 00:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:42:19.006144534 +0000 UTC m=+1.533146858" watchObservedRunningTime="2026-04-14 00:42:19.028988464 +0000 UTC m=+1.555990773" Apr 14 00:42:19.050852 kubelet[2695]: I0414 00:42:19.050310 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.050286358 podStartE2EDuration="5.050286358s" podCreationTimestamp="2026-04-14 00:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:42:19.031237451 +0000 UTC m=+1.558239777" watchObservedRunningTime="2026-04-14 00:42:19.050286358 +0000 UTC m=+1.577288688" Apr 14 00:42:19.894154 kubelet[2695]: E0414 00:42:19.893997 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:19.896804 kubelet[2695]: E0414 00:42:19.894401 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:19.896804 kubelet[2695]: E0414 00:42:19.894430 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:20.748831 kubelet[2695]: I0414 00:42:20.748766 2695 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 00:42:20.751710 containerd[1585]: time="2026-04-14T00:42:20.751578722Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 14 00:42:20.752624 kubelet[2695]: I0414 00:42:20.752363 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 00:42:21.639539 kubelet[2695]: E0414 00:42:21.639241 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:21.840103 kubelet[2695]: I0414 00:42:21.836239 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/077dfebe-0690-451f-a807-774e9f5bec3b-lib-modules\") pod \"kube-proxy-sw766\" (UID: \"077dfebe-0690-451f-a807-774e9f5bec3b\") " pod="kube-system/kube-proxy-sw766" Apr 14 00:42:21.893686 kubelet[2695]: I0414 00:42:21.837125 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2c9\" (UniqueName: \"kubernetes.io/projected/077dfebe-0690-451f-a807-774e9f5bec3b-kube-api-access-km2c9\") pod \"kube-proxy-sw766\" (UID: \"077dfebe-0690-451f-a807-774e9f5bec3b\") " pod="kube-system/kube-proxy-sw766" Apr 14 00:42:21.895139 kubelet[2695]: I0414 00:42:21.895108 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/077dfebe-0690-451f-a807-774e9f5bec3b-kube-proxy\") pod \"kube-proxy-sw766\" (UID: \"077dfebe-0690-451f-a807-774e9f5bec3b\") " pod="kube-system/kube-proxy-sw766" Apr 14 00:42:21.895555 kubelet[2695]: I0414 00:42:21.895451 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/077dfebe-0690-451f-a807-774e9f5bec3b-xtables-lock\") pod \"kube-proxy-sw766\" (UID: \"077dfebe-0690-451f-a807-774e9f5bec3b\") " pod="kube-system/kube-proxy-sw766" Apr 14 00:42:21.905242 kubelet[2695]: E0414 00:42:21.905078 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:22.097671 kubelet[2695]: I0414 00:42:22.097414 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6944e3f2-bed2-4e57-a172-62556fb1d78d-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-7dthb\" (UID: \"6944e3f2-bed2-4e57-a172-62556fb1d78d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7dthb" Apr 14 00:42:22.098027 kubelet[2695]: I0414 00:42:22.097941 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vc2\" (UniqueName: \"kubernetes.io/projected/6944e3f2-bed2-4e57-a172-62556fb1d78d-kube-api-access-j7vc2\") pod \"tigera-operator-6bf85f8dd-7dthb\" (UID: \"6944e3f2-bed2-4e57-a172-62556fb1d78d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7dthb" Apr 14 00:42:22.102398 kubelet[2695]: E0414 00:42:22.102257 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:22.105658 containerd[1585]: time="2026-04-14T00:42:22.104711950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sw766,Uid:077dfebe-0690-451f-a807-774e9f5bec3b,Namespace:kube-system,Attempt:0,}" Apr 14 00:42:22.159439 containerd[1585]: time="2026-04-14T00:42:22.158438748Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:22.159439 containerd[1585]: time="2026-04-14T00:42:22.158767550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:22.159439 containerd[1585]: time="2026-04-14T00:42:22.158798547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:22.159439 containerd[1585]: time="2026-04-14T00:42:22.158956650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:22.243047 containerd[1585]: time="2026-04-14T00:42:22.242927858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sw766,Uid:077dfebe-0690-451f-a807-774e9f5bec3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"466e18097e82ff672fd34717dfe0cbaa8b5b69e53d09fed4fa0fde988f17c1f2\"" Apr 14 00:42:22.244749 kubelet[2695]: E0414 00:42:22.244588 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:22.256373 containerd[1585]: time="2026-04-14T00:42:22.256229833Z" level=info msg="CreateContainer within sandbox \"466e18097e82ff672fd34717dfe0cbaa8b5b69e53d09fed4fa0fde988f17c1f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 00:42:22.283628 containerd[1585]: time="2026-04-14T00:42:22.283401365Z" level=info msg="CreateContainer within sandbox \"466e18097e82ff672fd34717dfe0cbaa8b5b69e53d09fed4fa0fde988f17c1f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"079e76101baed4c8be241c8e7dad19310f7fe69d958bd6d3dc27f3121caf267e\"" Apr 14 00:42:22.287661 containerd[1585]: time="2026-04-14T00:42:22.286235886Z" level=info msg="StartContainer for \"079e76101baed4c8be241c8e7dad19310f7fe69d958bd6d3dc27f3121caf267e\"" Apr 14 00:42:22.295974 containerd[1585]: time="2026-04-14T00:42:22.295329192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7dthb,Uid:6944e3f2-bed2-4e57-a172-62556fb1d78d,Namespace:tigera-operator,Attempt:0,}" Apr 14 00:42:22.411144 containerd[1585]: time="2026-04-14T00:42:22.407619433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:22.415027 containerd[1585]: time="2026-04-14T00:42:22.414571805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:22.415027 containerd[1585]: time="2026-04-14T00:42:22.414627138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:22.415027 containerd[1585]: time="2026-04-14T00:42:22.414750515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:22.457589 containerd[1585]: time="2026-04-14T00:42:22.455721210Z" level=info msg="StartContainer for \"079e76101baed4c8be241c8e7dad19310f7fe69d958bd6d3dc27f3121caf267e\" returns successfully" Apr 14 00:42:22.515014 containerd[1585]: time="2026-04-14T00:42:22.514796790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7dthb,Uid:6944e3f2-bed2-4e57-a172-62556fb1d78d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9721b5d3ce65dee40dd01df1d2035e25ff14f8e433067acde83ce096f663fd25\"" Apr 14 00:42:22.521596 containerd[1585]: time="2026-04-14T00:42:22.520383900Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 14 00:42:22.923373 kubelet[2695]: E0414 00:42:22.923291 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:22.928668 kubelet[2695]: E0414 00:42:22.926619 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:22.996180 kubelet[2695]: I0414 00:42:22.996062 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sw766" podStartSLOduration=1.996003867 podStartE2EDuration="1.996003867s" podCreationTimestamp="2026-04-14 00:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:42:22.992443778 +0000 UTC m=+5.519446090" watchObservedRunningTime="2026-04-14 00:42:22.996003867 +0000 UTC m=+5.523006178" Apr 14 00:42:23.134936 kubelet[2695]: E0414 00:42:23.134763 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:23.928363 kubelet[2695]: E0414 00:42:23.928293 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:24.043601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624719215.mount: Deactivated successfully. 
Apr 14 00:42:24.932109 kubelet[2695]: E0414 00:42:24.932027 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:25.587248 containerd[1585]: time="2026-04-14T00:42:25.586478454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:25.588881 containerd[1585]: time="2026-04-14T00:42:25.588773215Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 14 00:42:25.590858 containerd[1585]: time="2026-04-14T00:42:25.590747491Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:25.601688 containerd[1585]: time="2026-04-14T00:42:25.601583398Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:25.605867 containerd[1585]: time="2026-04-14T00:42:25.605425248Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.084973865s" Apr 14 00:42:25.605867 containerd[1585]: time="2026-04-14T00:42:25.605555992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 14 00:42:25.617088 containerd[1585]: time="2026-04-14T00:42:25.615982325Z" level=info msg="CreateContainer within sandbox \"9721b5d3ce65dee40dd01df1d2035e25ff14f8e433067acde83ce096f663fd25\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 14 00:42:25.651612 containerd[1585]: time="2026-04-14T00:42:25.651443994Z" level=info msg="CreateContainer within sandbox \"9721b5d3ce65dee40dd01df1d2035e25ff14f8e433067acde83ce096f663fd25\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6eb38295cb6a1333ef9e5cf72d0e400374403311c24556242ccc4d8eceb4de0\"" Apr 14 00:42:25.655462 containerd[1585]: time="2026-04-14T00:42:25.652948151Z" level=info msg="StartContainer for \"a6eb38295cb6a1333ef9e5cf72d0e400374403311c24556242ccc4d8eceb4de0\"" Apr 14 00:42:25.751700 containerd[1585]: time="2026-04-14T00:42:25.751336299Z" level=info msg="StartContainer for \"a6eb38295cb6a1333ef9e5cf72d0e400374403311c24556242ccc4d8eceb4de0\" returns successfully" Apr 14 00:42:29.628820 kubelet[2695]: E0414 00:42:29.628712 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:29.783942 kubelet[2695]: I0414 00:42:29.783799 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-7dthb" podStartSLOduration=5.696242042 podStartE2EDuration="8.78378677s" podCreationTimestamp="2026-04-14 00:42:21 +0000 UTC" firstStartedPulling="2026-04-14 00:42:22.519545055 +0000 UTC m=+5.046547369" lastFinishedPulling="2026-04-14 00:42:25.607089779 +0000 UTC m=+8.134092097" 
observedRunningTime="2026-04-14 00:42:25.959074112 +0000 UTC m=+8.486076430" watchObservedRunningTime="2026-04-14 00:42:29.78378677 +0000 UTC m=+12.310789091" Apr 14 00:42:31.505840 update_engine[1572]: I20260414 00:42:31.505709 1572 update_attempter.cc:509] Updating boot flags... Apr 14 00:42:31.622778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3110) Apr 14 00:42:31.648796 sudo[1791]: pam_unix(sudo:session): session closed for user root Apr 14 00:42:31.667861 sshd[1785]: pam_unix(sshd:session): session closed for user core Apr 14 00:42:31.704389 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:33824.service: Deactivated successfully. Apr 14 00:42:31.715645 systemd[1]: session-7.scope: Deactivated successfully. Apr 14 00:42:31.716779 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Apr 14 00:42:31.732355 systemd-logind[1566]: Removed session 7. Apr 14 00:42:31.800541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3110) Apr 14 00:42:37.006611 kubelet[2695]: I0414 00:42:37.006490 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrbd\" (UniqueName: \"kubernetes.io/projected/66b1ffb9-2535-4e1a-bc14-fa2f90e38271-kube-api-access-5xrbd\") pod \"calico-typha-5f844fcb8c-qct2g\" (UID: \"66b1ffb9-2535-4e1a-bc14-fa2f90e38271\") " pod="calico-system/calico-typha-5f844fcb8c-qct2g" Apr 14 00:42:37.007372 kubelet[2695]: I0414 00:42:37.007263 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b1ffb9-2535-4e1a-bc14-fa2f90e38271-tigera-ca-bundle\") pod \"calico-typha-5f844fcb8c-qct2g\" (UID: \"66b1ffb9-2535-4e1a-bc14-fa2f90e38271\") " pod="calico-system/calico-typha-5f844fcb8c-qct2g" Apr 14 00:42:37.007372 kubelet[2695]: I0414 00:42:37.007372 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66b1ffb9-2535-4e1a-bc14-fa2f90e38271-typha-certs\") pod \"calico-typha-5f844fcb8c-qct2g\" (UID: \"66b1ffb9-2535-4e1a-bc14-fa2f90e38271\") " pod="calico-system/calico-typha-5f844fcb8c-qct2g" Apr 14 00:42:37.171483 kubelet[2695]: E0414 00:42:37.171268 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:37.179004 containerd[1585]: time="2026-04-14T00:42:37.178922657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f844fcb8c-qct2g,Uid:66b1ffb9-2535-4e1a-bc14-fa2f90e38271,Namespace:calico-system,Attempt:0,}" Apr 14 00:42:37.235349 kubelet[2695]: I0414 00:42:37.232013 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-policysync\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.235349 kubelet[2695]: I0414 00:42:37.232085 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-bpffs\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.239888 
kubelet[2695]: I0414 00:42:37.233259 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-cni-bin-dir\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.306759 kubelet[2695]: I0414 00:42:37.305603 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsqk\" (UniqueName: \"kubernetes.io/projected/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-kube-api-access-qzsqk\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.306759 kubelet[2695]: I0414 00:42:37.305678 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-cni-net-dir\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.306759 kubelet[2695]: I0414 00:42:37.305765 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-node-certs\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.306759 kubelet[2695]: I0414 00:42:37.305848 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-xtables-lock\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.306759 kubelet[2695]: I0414 00:42:37.306065 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-var-lib-calico\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.307030 kubelet[2695]: I0414 00:42:37.306204 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-nodeproc\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.307030 kubelet[2695]: I0414 00:42:37.306306 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-var-run-calico\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.307030 kubelet[2695]: I0414 00:42:37.306402 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-cni-log-dir\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.318391 kubelet[2695]: I0414 00:42:37.316934 2695 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-lib-modules\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.318391 kubelet[2695]: I0414 00:42:37.317261 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-sys-fs\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.318391 kubelet[2695]: I0414 00:42:37.317404 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-flexvol-driver-host\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.318391 kubelet[2695]: I0414 00:42:37.317646 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8e45f92-a582-41a4-bbd4-7ad3a382c76d-tigera-ca-bundle\") pod \"calico-node-df4b4\" (UID: \"c8e45f92-a582-41a4-bbd4-7ad3a382c76d\") " pod="calico-system/calico-node-df4b4" Apr 14 00:42:37.333926 kubelet[2695]: E0414 00:42:37.332963 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:37.379366 containerd[1585]: time="2026-04-14T00:42:37.378948507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:37.379626 containerd[1585]: time="2026-04-14T00:42:37.379362602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:37.379626 containerd[1585]: time="2026-04-14T00:42:37.379388711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:37.383333 containerd[1585]: time="2026-04-14T00:42:37.383127342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:37.418659 kubelet[2695]: I0414 00:42:37.418449 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84892692-33db-4109-aafb-76ce1e050199-socket-dir\") pod \"csi-node-driver-nxm2k\" (UID: \"84892692-33db-4109-aafb-76ce1e050199\") " pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:42:37.420644 kubelet[2695]: I0414 00:42:37.420447 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/84892692-33db-4109-aafb-76ce1e050199-varrun\") pod \"csi-node-driver-nxm2k\" (UID: \"84892692-33db-4109-aafb-76ce1e050199\") " pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:42:37.420644 kubelet[2695]: I0414 00:42:37.420613 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84892692-33db-4109-aafb-76ce1e050199-kubelet-dir\") pod \"csi-node-driver-nxm2k\" (UID: \"84892692-33db-4109-aafb-76ce1e050199\") " pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:42:37.420804 kubelet[2695]: I0414 00:42:37.420663 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84892692-33db-4109-aafb-76ce1e050199-registration-dir\") pod \"csi-node-driver-nxm2k\" (UID: \"84892692-33db-4109-aafb-76ce1e050199\") " pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:42:37.431238 kubelet[2695]: I0414 00:42:37.426981 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm94z\" (UniqueName: \"kubernetes.io/projected/84892692-33db-4109-aafb-76ce1e050199-kube-api-access-bm94z\") pod \"csi-node-driver-nxm2k\" (UID: \"84892692-33db-4109-aafb-76ce1e050199\") " pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:42:37.438842 kubelet[2695]: E0414 00:42:37.437716 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.438842 kubelet[2695]: W0414 00:42:37.437886 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.438842 kubelet[2695]: E0414 00:42:37.437977 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.443039 kubelet[2695]: E0414 00:42:37.442956 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.443206 kubelet[2695]: W0414 00:42:37.443070 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.443206 kubelet[2695]: E0414 00:42:37.443100 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.489176 containerd[1585]: time="2026-04-14T00:42:37.488637032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f844fcb8c-qct2g,Uid:66b1ffb9-2535-4e1a-bc14-fa2f90e38271,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4ab7b0f89e36b55a5604a75fe133a3e777454dd4292ae4642db15ac72a9635b\"" Apr 14 00:42:37.491374 containerd[1585]: time="2026-04-14T00:42:37.491263820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 14 00:42:37.492759 kubelet[2695]: E0414 00:42:37.489707 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:37.528374 kubelet[2695]: E0414 00:42:37.528261 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.528374 kubelet[2695]: W0414 00:42:37.528303 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.528374 kubelet[2695]: E0414 00:42:37.528339 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.529242 kubelet[2695]: E0414 00:42:37.529106 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.529242 kubelet[2695]: W0414 00:42:37.529175 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.529643 kubelet[2695]: E0414 00:42:37.529372 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.530793 kubelet[2695]: E0414 00:42:37.530761 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.531124 kubelet[2695]: W0414 00:42:37.531089 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.532232 kubelet[2695]: E0414 00:42:37.531883 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.532813 kubelet[2695]: E0414 00:42:37.532791 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.533167 kubelet[2695]: W0414 00:42:37.532959 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.533167 kubelet[2695]: E0414 00:42:37.532976 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.533545 kubelet[2695]: E0414 00:42:37.533429 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.533545 kubelet[2695]: W0414 00:42:37.533450 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.533545 kubelet[2695]: E0414 00:42:37.533458 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.534747 kubelet[2695]: E0414 00:42:37.534601 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.534747 kubelet[2695]: W0414 00:42:37.534740 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.534991 kubelet[2695]: E0414 00:42:37.534761 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.536261 kubelet[2695]: E0414 00:42:37.535991 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.536261 kubelet[2695]: W0414 00:42:37.536234 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.536456 kubelet[2695]: E0414 00:42:37.536307 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.537075 kubelet[2695]: E0414 00:42:37.536999 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.537075 kubelet[2695]: W0414 00:42:37.537050 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.537075 kubelet[2695]: E0414 00:42:37.537080 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.537853 kubelet[2695]: E0414 00:42:37.537823 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.537853 kubelet[2695]: W0414 00:42:37.537841 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.537915 kubelet[2695]: E0414 00:42:37.537860 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.538411 kubelet[2695]: E0414 00:42:37.538359 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.538411 kubelet[2695]: W0414 00:42:37.538406 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.538633 kubelet[2695]: E0414 00:42:37.538429 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.539033 kubelet[2695]: E0414 00:42:37.539003 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.539033 kubelet[2695]: W0414 00:42:37.539032 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.539184 kubelet[2695]: E0414 00:42:37.539049 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.539327 kubelet[2695]: E0414 00:42:37.539300 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.539327 kubelet[2695]: W0414 00:42:37.539324 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.539405 kubelet[2695]: E0414 00:42:37.539333 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.539596 kubelet[2695]: E0414 00:42:37.539566 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.539596 kubelet[2695]: W0414 00:42:37.539594 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.539971 kubelet[2695]: E0414 00:42:37.539603 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.540464 kubelet[2695]: E0414 00:42:37.540395 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.540464 kubelet[2695]: W0414 00:42:37.540458 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.541835 kubelet[2695]: E0414 00:42:37.540486 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.541835 kubelet[2695]: E0414 00:42:37.541417 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.541835 kubelet[2695]: W0414 00:42:37.541443 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.541835 kubelet[2695]: E0414 00:42:37.541465 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.542439 kubelet[2695]: E0414 00:42:37.542373 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.542439 kubelet[2695]: W0414 00:42:37.542422 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.542439 kubelet[2695]: E0414 00:42:37.542444 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.543065 kubelet[2695]: E0414 00:42:37.542975 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.543065 kubelet[2695]: W0414 00:42:37.543009 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.543065 kubelet[2695]: E0414 00:42:37.543030 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.543436 kubelet[2695]: E0414 00:42:37.543392 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.543436 kubelet[2695]: W0414 00:42:37.543425 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.543436 kubelet[2695]: E0414 00:42:37.543436 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.545740 kubelet[2695]: E0414 00:42:37.545689 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.545740 kubelet[2695]: W0414 00:42:37.545703 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.545740 kubelet[2695]: E0414 00:42:37.545724 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.546341 kubelet[2695]: E0414 00:42:37.546253 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.546341 kubelet[2695]: W0414 00:42:37.546291 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.546341 kubelet[2695]: E0414 00:42:37.546316 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.546957 kubelet[2695]: E0414 00:42:37.546910 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.546957 kubelet[2695]: W0414 00:42:37.546946 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.547346 kubelet[2695]: E0414 00:42:37.546970 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.548661 kubelet[2695]: E0414 00:42:37.548370 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.548661 kubelet[2695]: W0414 00:42:37.548387 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.548661 kubelet[2695]: E0414 00:42:37.548409 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.548933 kubelet[2695]: E0414 00:42:37.548899 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.548954 kubelet[2695]: W0414 00:42:37.548931 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.548980 kubelet[2695]: E0414 00:42:37.548950 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.549968 kubelet[2695]: E0414 00:42:37.549919 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.549968 kubelet[2695]: W0414 00:42:37.549943 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.549968 kubelet[2695]: E0414 00:42:37.549966 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:37.550972 kubelet[2695]: E0414 00:42:37.550960 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.550995 kubelet[2695]: W0414 00:42:37.550975 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.550995 kubelet[2695]: E0414 00:42:37.550990 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.564948 kubelet[2695]: E0414 00:42:37.564284 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:37.564948 kubelet[2695]: W0414 00:42:37.564396 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:37.564948 kubelet[2695]: E0414 00:42:37.564428 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:37.693281 containerd[1585]: time="2026-04-14T00:42:37.693056053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-df4b4,Uid:c8e45f92-a582-41a4-bbd4-7ad3a382c76d,Namespace:calico-system,Attempt:0,}" Apr 14 00:42:37.837059 containerd[1585]: time="2026-04-14T00:42:37.835623414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:42:37.837059 containerd[1585]: time="2026-04-14T00:42:37.835682765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:42:37.837059 containerd[1585]: time="2026-04-14T00:42:37.835694790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:37.837059 containerd[1585]: time="2026-04-14T00:42:37.836769146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:42:37.968071 containerd[1585]: time="2026-04-14T00:42:37.967849100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-df4b4,Uid:c8e45f92-a582-41a4-bbd4-7ad3a382c76d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\"" Apr 14 00:42:38.811976 kubelet[2695]: E0414 00:42:38.811644 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:39.273684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750974949.mount: Deactivated successfully. 
Apr 14 00:42:40.339040 containerd[1585]: time="2026-04-14T00:42:40.338822617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:40.340213 containerd[1585]: time="2026-04-14T00:42:40.340082955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 14 00:42:40.343255 containerd[1585]: time="2026-04-14T00:42:40.343070356Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:40.349755 containerd[1585]: time="2026-04-14T00:42:40.349697580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:40.352007 containerd[1585]: time="2026-04-14T00:42:40.351875672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.860545629s" Apr 14 00:42:40.352177 containerd[1585]: time="2026-04-14T00:42:40.352056610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 14 00:42:40.355593 containerd[1585]: time="2026-04-14T00:42:40.355134565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 14 00:42:40.403021 containerd[1585]: time="2026-04-14T00:42:40.402724931Z" level=info msg="CreateContainer within sandbox \"a4ab7b0f89e36b55a5604a75fe133a3e777454dd4292ae4642db15ac72a9635b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 14 00:42:40.491974 containerd[1585]: time="2026-04-14T00:42:40.491882951Z" level=info msg="CreateContainer within sandbox \"a4ab7b0f89e36b55a5604a75fe133a3e777454dd4292ae4642db15ac72a9635b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c455728ec2fb35a431efca6475d55ccd9d3f38491e9d05d72375d21ae9b6e58\"" Apr 14 00:42:40.495113 containerd[1585]: time="2026-04-14T00:42:40.494913644Z" level=info msg="StartContainer for \"7c455728ec2fb35a431efca6475d55ccd9d3f38491e9d05d72375d21ae9b6e58\"" Apr 14 00:42:40.801306 containerd[1585]: time="2026-04-14T00:42:40.801038702Z" level=info msg="StartContainer for \"7c455728ec2fb35a431efca6475d55ccd9d3f38491e9d05d72375d21ae9b6e58\" returns successfully" Apr 14 00:42:40.811928 kubelet[2695]: E0414 00:42:40.811689 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:41.105181 kubelet[2695]: E0414 00:42:41.103785 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:41.192942 kubelet[2695]: E0414 00:42:41.192836 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Apr 14 00:42:41.193706 kubelet[2695]: W0414 00:42:41.193668 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.193846 kubelet[2695]: E0414 00:42:41.193831 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.194204 kubelet[2695]: E0414 00:42:41.194133 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.194302 kubelet[2695]: W0414 00:42:41.194292 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.194361 kubelet[2695]: E0414 00:42:41.194352 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.194715 kubelet[2695]: E0414 00:42:41.194655 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.194715 kubelet[2695]: W0414 00:42:41.194666 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.194715 kubelet[2695]: E0414 00:42:41.194676 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.195333 kubelet[2695]: E0414 00:42:41.195200 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.195333 kubelet[2695]: W0414 00:42:41.195220 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.195333 kubelet[2695]: E0414 00:42:41.195238 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.196906 kubelet[2695]: E0414 00:42:41.196466 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.196906 kubelet[2695]: W0414 00:42:41.196490 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.196906 kubelet[2695]: E0414 00:42:41.196825 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.199106 kubelet[2695]: E0414 00:42:41.198961 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.199106 kubelet[2695]: W0414 00:42:41.198983 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.199106 kubelet[2695]: E0414 00:42:41.199003 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.200934 kubelet[2695]: E0414 00:42:41.200707 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.200934 kubelet[2695]: W0414 00:42:41.200727 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.200934 kubelet[2695]: E0414 00:42:41.200744 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.205415 kubelet[2695]: E0414 00:42:41.204691 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.205415 kubelet[2695]: W0414 00:42:41.204723 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.205415 kubelet[2695]: E0414 00:42:41.204747 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.206131 kubelet[2695]: E0414 00:42:41.205914 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.206131 kubelet[2695]: W0414 00:42:41.205938 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.206131 kubelet[2695]: E0414 00:42:41.205972 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.206634 kubelet[2695]: E0414 00:42:41.206608 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.206736 kubelet[2695]: W0414 00:42:41.206728 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.206816 kubelet[2695]: E0414 00:42:41.206802 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.207645 kubelet[2695]: E0414 00:42:41.207477 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.207914 kubelet[2695]: W0414 00:42:41.207896 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.207971 kubelet[2695]: E0414 00:42:41.207962 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.208798 kubelet[2695]: E0414 00:42:41.208774 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.208984 kubelet[2695]: W0414 00:42:41.208899 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.208984 kubelet[2695]: E0414 00:42:41.208922 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.210118 kubelet[2695]: E0414 00:42:41.209857 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.210118 kubelet[2695]: W0414 00:42:41.209923 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.210118 kubelet[2695]: E0414 00:42:41.209941 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.211982 kubelet[2695]: E0414 00:42:41.211878 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.211982 kubelet[2695]: W0414 00:42:41.211902 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.211982 kubelet[2695]: E0414 00:42:41.211924 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.215852 kubelet[2695]: E0414 00:42:41.215708 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.215852 kubelet[2695]: W0414 00:42:41.215750 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.215852 kubelet[2695]: E0414 00:42:41.215773 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.234908 kubelet[2695]: E0414 00:42:41.233904 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.234908 kubelet[2695]: W0414 00:42:41.233932 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.234908 kubelet[2695]: E0414 00:42:41.233952 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.241363 kubelet[2695]: E0414 00:42:41.240371 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.241363 kubelet[2695]: W0414 00:42:41.240401 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.241363 kubelet[2695]: E0414 00:42:41.240468 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.242718 kubelet[2695]: E0414 00:42:41.242131 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.242718 kubelet[2695]: W0414 00:42:41.242197 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.242718 kubelet[2695]: E0414 00:42:41.242220 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.243896 kubelet[2695]: E0414 00:42:41.243868 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.245300 kubelet[2695]: W0414 00:42:41.244247 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.245300 kubelet[2695]: E0414 00:42:41.244357 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.248087 kubelet[2695]: E0414 00:42:41.247811 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.248087 kubelet[2695]: W0414 00:42:41.247838 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.248087 kubelet[2695]: E0414 00:42:41.247860 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.252329 kubelet[2695]: E0414 00:42:41.252270 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.252918 kubelet[2695]: W0414 00:42:41.252671 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.252918 kubelet[2695]: E0414 00:42:41.252781 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.255104 kubelet[2695]: E0414 00:42:41.254789 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.255104 kubelet[2695]: W0414 00:42:41.254808 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.255104 kubelet[2695]: E0414 00:42:41.254829 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.258085 kubelet[2695]: E0414 00:42:41.257869 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.258085 kubelet[2695]: W0414 00:42:41.257897 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.258085 kubelet[2695]: E0414 00:42:41.257921 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.260911 kubelet[2695]: E0414 00:42:41.260875 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.261763 kubelet[2695]: W0414 00:42:41.261043 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.261763 kubelet[2695]: E0414 00:42:41.261069 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.262220 kubelet[2695]: E0414 00:42:41.262199 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.262288 kubelet[2695]: W0414 00:42:41.262275 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.262365 kubelet[2695]: E0414 00:42:41.262354 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.263468 kubelet[2695]: E0414 00:42:41.263091 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.263468 kubelet[2695]: W0414 00:42:41.263437 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.264174 kubelet[2695]: E0414 00:42:41.263978 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.265417 kubelet[2695]: E0414 00:42:41.265269 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.265727 kubelet[2695]: W0414 00:42:41.265434 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.265727 kubelet[2695]: E0414 00:42:41.265469 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.266606 kubelet[2695]: E0414 00:42:41.266441 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.266707 kubelet[2695]: W0414 00:42:41.266616 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.266707 kubelet[2695]: E0414 00:42:41.266639 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.268889 kubelet[2695]: E0414 00:42:41.268727 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.269001 kubelet[2695]: W0414 00:42:41.268935 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.269001 kubelet[2695]: E0414 00:42:41.268967 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.269677 kubelet[2695]: E0414 00:42:41.269566 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.269677 kubelet[2695]: W0414 00:42:41.269588 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.269677 kubelet[2695]: E0414 00:42:41.269604 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:41.272129 kubelet[2695]: E0414 00:42:41.271904 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.272398 kubelet[2695]: W0414 00:42:41.272281 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.272398 kubelet[2695]: E0414 00:42:41.272332 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.273636 kubelet[2695]: E0414 00:42:41.273575 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.273636 kubelet[2695]: W0414 00:42:41.273621 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.273636 kubelet[2695]: E0414 00:42:41.273641 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:41.274008 kubelet[2695]: E0414 00:42:41.273966 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:41.274008 kubelet[2695]: W0414 00:42:41.273986 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:41.274008 kubelet[2695]: E0414 00:42:41.273994 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.110431 kubelet[2695]: I0414 00:42:42.110215 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 00:42:42.111292 kubelet[2695]: E0414 00:42:42.111001 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:42.123646 kubelet[2695]: E0414 00:42:42.123174 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.123646 kubelet[2695]: W0414 00:42:42.123218 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.123646 kubelet[2695]: E0414 00:42:42.123250 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.124591 kubelet[2695]: E0414 00:42:42.124551 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.131252 kubelet[2695]: W0414 00:42:42.130487 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.131252 kubelet[2695]: E0414 00:42:42.130819 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.131827 kubelet[2695]: E0414 00:42:42.131760 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.131827 kubelet[2695]: W0414 00:42:42.131814 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.131970 kubelet[2695]: E0414 00:42:42.131839 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.134206 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.135848 kubelet[2695]: W0414 00:42:42.134246 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.134269 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.134993 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.135848 kubelet[2695]: W0414 00:42:42.135014 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.135036 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.135571 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.135848 kubelet[2695]: W0414 00:42:42.135601 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.135848 kubelet[2695]: E0414 00:42:42.135621 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.136455 kubelet[2695]: E0414 00:42:42.136079 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.138194 kubelet[2695]: W0414 00:42:42.136143 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.138194 kubelet[2695]: E0414 00:42:42.138185 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.139414 kubelet[2695]: E0414 00:42:42.139232 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.139718 kubelet[2695]: W0414 00:42:42.139444 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.139718 kubelet[2695]: E0414 00:42:42.139479 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.140463 kubelet[2695]: E0414 00:42:42.139886 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.140463 kubelet[2695]: W0414 00:42:42.139911 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.140463 kubelet[2695]: E0414 00:42:42.139941 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.140463 kubelet[2695]: E0414 00:42:42.140442 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.140463 kubelet[2695]: W0414 00:42:42.140463 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.140699 kubelet[2695]: E0414 00:42:42.140483 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.142039 kubelet[2695]: E0414 00:42:42.141638 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.142039 kubelet[2695]: W0414 00:42:42.141844 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.142039 kubelet[2695]: E0414 00:42:42.141896 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.142428 kubelet[2695]: E0414 00:42:42.142388 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.142428 kubelet[2695]: W0414 00:42:42.142424 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.142740 kubelet[2695]: E0414 00:42:42.142443 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.143875 kubelet[2695]: E0414 00:42:42.143632 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.144039 kubelet[2695]: W0414 00:42:42.143898 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.144039 kubelet[2695]: E0414 00:42:42.143928 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.144964 kubelet[2695]: E0414 00:42:42.144904 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.144964 kubelet[2695]: W0414 00:42:42.144948 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.145084 kubelet[2695]: E0414 00:42:42.144969 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.145699 kubelet[2695]: E0414 00:42:42.145646 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.145699 kubelet[2695]: W0414 00:42:42.145684 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.145939 kubelet[2695]: E0414 00:42:42.145703 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.203302 kubelet[2695]: E0414 00:42:42.203136 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.203302 kubelet[2695]: W0414 00:42:42.203271 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.203302 kubelet[2695]: E0414 00:42:42.203302 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.205260 kubelet[2695]: E0414 00:42:42.205122 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.205748 kubelet[2695]: W0414 00:42:42.205303 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.205748 kubelet[2695]: E0414 00:42:42.205403 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.208064 kubelet[2695]: E0414 00:42:42.207734 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.208064 kubelet[2695]: W0414 00:42:42.207759 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.208064 kubelet[2695]: E0414 00:42:42.207782 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.208826 kubelet[2695]: E0414 00:42:42.208796 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.208826 kubelet[2695]: W0414 00:42:42.208816 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.208826 kubelet[2695]: E0414 00:42:42.208836 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.208826 kubelet[2695]: E0414 00:42:42.209690 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.208826 kubelet[2695]: W0414 00:42:42.209716 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.208826 kubelet[2695]: E0414 00:42:42.209738 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.210723 kubelet[2695]: E0414 00:42:42.210124 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.210723 kubelet[2695]: W0414 00:42:42.210134 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.210723 kubelet[2695]: E0414 00:42:42.210144 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.211606 kubelet[2695]: E0414 00:42:42.211006 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.211606 kubelet[2695]: W0414 00:42:42.211026 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.211606 kubelet[2695]: E0414 00:42:42.211092 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.211767 kubelet[2695]: E0414 00:42:42.211681 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.211767 kubelet[2695]: W0414 00:42:42.211696 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.211767 kubelet[2695]: E0414 00:42:42.211713 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.216409 kubelet[2695]: E0414 00:42:42.214846 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.216409 kubelet[2695]: W0414 00:42:42.214908 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.216409 kubelet[2695]: E0414 00:42:42.214931 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.220756 kubelet[2695]: E0414 00:42:42.220362 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.220756 kubelet[2695]: W0414 00:42:42.220389 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.220756 kubelet[2695]: E0414 00:42:42.220410 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.221445 kubelet[2695]: E0414 00:42:42.221038 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.221445 kubelet[2695]: W0414 00:42:42.221077 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.221445 kubelet[2695]: E0414 00:42:42.221089 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.223422 kubelet[2695]: E0414 00:42:42.222093 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.223422 kubelet[2695]: W0414 00:42:42.222138 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.223422 kubelet[2695]: E0414 00:42:42.222196 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.223422 kubelet[2695]: E0414 00:42:42.222697 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.223422 kubelet[2695]: W0414 00:42:42.222718 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.223422 kubelet[2695]: E0414 00:42:42.222733 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.224384 kubelet[2695]: E0414 00:42:42.224332 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.224384 kubelet[2695]: W0414 00:42:42.224371 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.224649 kubelet[2695]: E0414 00:42:42.224388 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.227663 kubelet[2695]: E0414 00:42:42.227449 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.227663 kubelet[2695]: W0414 00:42:42.227490 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.227663 kubelet[2695]: E0414 00:42:42.227589 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.229132 kubelet[2695]: E0414 00:42:42.229039 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.229132 kubelet[2695]: W0414 00:42:42.229119 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.229309 kubelet[2695]: E0414 00:42:42.229200 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 00:42:42.231109 kubelet[2695]: E0414 00:42:42.230216 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.231109 kubelet[2695]: W0414 00:42:42.230438 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.231109 kubelet[2695]: E0414 00:42:42.230613 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.233403 kubelet[2695]: E0414 00:42:42.232047 2695 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 00:42:42.233403 kubelet[2695]: W0414 00:42:42.232249 2695 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 00:42:42.233403 kubelet[2695]: E0414 00:42:42.232296 2695 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 00:42:42.367098 containerd[1585]: time="2026-04-14T00:42:42.366829808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:42.370625 containerd[1585]: time="2026-04-14T00:42:42.368930733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 14 00:42:42.379263 containerd[1585]: time="2026-04-14T00:42:42.379084131Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:42.392409 containerd[1585]: time="2026-04-14T00:42:42.392311618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:42.393876 containerd[1585]: time="2026-04-14T00:42:42.393476898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.037123673s" Apr 14 00:42:42.393876 containerd[1585]: time="2026-04-14T00:42:42.393877443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 14 00:42:42.434847 containerd[1585]: time="2026-04-14T00:42:42.434771856Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 14 00:42:42.517041 containerd[1585]: time="2026-04-14T00:42:42.516893981Z" level=info msg="CreateContainer within sandbox 
\"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb\"" Apr 14 00:42:42.520777 containerd[1585]: time="2026-04-14T00:42:42.519457842Z" level=info msg="StartContainer for \"ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb\"" Apr 14 00:42:42.677258 containerd[1585]: time="2026-04-14T00:42:42.676728330Z" level=info msg="StartContainer for \"ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb\" returns successfully" Apr 14 00:42:42.811710 kubelet[2695]: E0414 00:42:42.810692 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:42.825710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb-rootfs.mount: Deactivated successfully. Apr 14 00:42:42.844732 containerd[1585]: time="2026-04-14T00:42:42.844022318Z" level=info msg="shim disconnected" id=ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb namespace=k8s.io Apr 14 00:42:42.844732 containerd[1585]: time="2026-04-14T00:42:42.844289061Z" level=warning msg="cleaning up after shim disconnected" id=ad54ad129f779468d5bd39048c8b14509cd950364d2c54b7d06934f7b52184eb namespace=k8s.io Apr 14 00:42:42.844732 containerd[1585]: time="2026-04-14T00:42:42.844313434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:42:43.130420 containerd[1585]: time="2026-04-14T00:42:43.129812409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 14 00:42:43.197065 kubelet[2695]: I0414 00:42:43.196912 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f844fcb8c-qct2g" podStartSLOduration=4.333458167 podStartE2EDuration="7.196857862s" podCreationTimestamp="2026-04-14 00:42:36 +0000 UTC" firstStartedPulling="2026-04-14 00:42:37.490938255 +0000 UTC m=+20.017940563" lastFinishedPulling="2026-04-14 00:42:40.354337941 +0000 UTC m=+22.881340258" observedRunningTime="2026-04-14 00:42:41.177854829 +0000 UTC m=+23.704857148" watchObservedRunningTime="2026-04-14 00:42:43.196857862 +0000 UTC m=+25.723860182" Apr 14 00:42:44.813107 kubelet[2695]: E0414 00:42:44.811957 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:46.811320 kubelet[2695]: E0414 00:42:46.811050 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:48.811698 kubelet[2695]: E0414 00:42:48.811613 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:50.811215 kubelet[2695]: E0414 00:42:50.810972 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:52.811484 kubelet[2695]: E0414 00:42:52.811071 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:52.908254 kubelet[2695]: I0414 00:42:52.907608 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 00:42:52.908254 kubelet[2695]: E0414 00:42:52.908248 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:53.286392 kubelet[2695]: E0414 00:42:53.284163 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:42:54.811414 kubelet[2695]: E0414 00:42:54.811135 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:55.669047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073718857.mount: Deactivated successfully. 
Apr 14 00:42:55.771881 containerd[1585]: time="2026-04-14T00:42:55.771182986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:55.778017 containerd[1585]: time="2026-04-14T00:42:55.777901991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 14 00:42:55.785446 containerd[1585]: time="2026-04-14T00:42:55.784404318Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:55.795895 containerd[1585]: time="2026-04-14T00:42:55.795776253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:42:55.800998 containerd[1585]: time="2026-04-14T00:42:55.800864732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.670985754s" Apr 14 00:42:55.800998 containerd[1585]: time="2026-04-14T00:42:55.800999928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 14 00:42:55.890434 containerd[1585]: time="2026-04-14T00:42:55.888047066Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 14 00:42:56.148094 containerd[1585]: time="2026-04-14T00:42:56.147002654Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f\"" Apr 14 00:42:56.153051 containerd[1585]: time="2026-04-14T00:42:56.152233240Z" level=info msg="StartContainer for \"8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f\"" Apr 14 00:42:56.682974 containerd[1585]: time="2026-04-14T00:42:56.681062033Z" level=info msg="StartContainer for \"8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f\" returns successfully" Apr 14 00:42:56.814157 kubelet[2695]: E0414 00:42:56.813083 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:42:56.893177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f-rootfs.mount: Deactivated successfully. 
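The ImageCreate/PullImage entries above record containerd's CRI plugin pulling ghcr.io/flatcar/calico/node:v3.31.4 into its k8s.io namespace on behalf of kubelet. For reference, a minimal sketch of an equivalent pull through the containerd Go client is shown below; the socket path and namespace are the usual defaults and are assumptions here, and kubelet itself drives this through the CRI API rather than this client:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the local containerd socket (assumed default path).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace, as the
        // ImageCreate events above show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.31.4", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }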
Apr 14 00:42:56.949285 containerd[1585]: time="2026-04-14T00:42:56.947423564Z" level=info msg="shim disconnected" id=8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f namespace=k8s.io Apr 14 00:42:56.949285 containerd[1585]: time="2026-04-14T00:42:56.947762791Z" level=warning msg="cleaning up after shim disconnected" id=8bea9ab1b38932c21bbea1458bda7d1f9435e75e7eca8b54318919ffda54e39f namespace=k8s.io Apr 14 00:42:56.949285 containerd[1585]: time="2026-04-14T00:42:56.947782655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:42:57.401972 containerd[1585]: time="2026-04-14T00:42:57.400729685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 14 00:42:58.817999 kubelet[2695]: E0414 00:42:58.816683 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:43:00.811842 kubelet[2695]: E0414 00:43:00.811134 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:43:02.812971 kubelet[2695]: E0414 00:43:02.811575 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:43:04.304838 containerd[1585]: time="2026-04-14T00:43:04.301080809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:04.306927 containerd[1585]: time="2026-04-14T00:43:04.306166220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 14 00:43:04.321888 containerd[1585]: time="2026-04-14T00:43:04.321765368Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:04.400041 containerd[1585]: time="2026-04-14T00:43:04.399406942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:04.405284 containerd[1585]: time="2026-04-14T00:43:04.404090346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.003162475s" Apr 14 00:43:04.405284 containerd[1585]: time="2026-04-14T00:43:04.404188243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 14 00:43:04.432772 containerd[1585]: time="2026-04-14T00:43:04.430287125Z" 
level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 00:43:04.478625 containerd[1585]: time="2026-04-14T00:43:04.477815568Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b\"" Apr 14 00:43:04.488325 containerd[1585]: time="2026-04-14T00:43:04.484073079Z" level=info msg="StartContainer for \"10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b\"" Apr 14 00:43:04.586474 systemd[1]: run-containerd-runc-k8s.io-10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b-runc.rsIrq0.mount: Deactivated successfully. Apr 14 00:43:04.683426 containerd[1585]: time="2026-04-14T00:43:04.683116002Z" level=info msg="StartContainer for \"10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b\" returns successfully" Apr 14 00:43:04.815867 kubelet[2695]: E0414 00:43:04.812018 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:43:06.234962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b-rootfs.mount: Deactivated successfully. Apr 14 00:43:06.239718 containerd[1585]: time="2026-04-14T00:43:06.239332548Z" level=info msg="shim disconnected" id=10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b namespace=k8s.io Apr 14 00:43:06.239718 containerd[1585]: time="2026-04-14T00:43:06.239435234Z" level=warning msg="cleaning up after shim disconnected" id=10390f964e78b555d7a438142ad7d44dbf41e7b4da735bf9c8e38470c85b984b namespace=k8s.io Apr 14 00:43:06.239718 containerd[1585]: time="2026-04-14T00:43:06.239446583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 00:43:06.248588 kubelet[2695]: I0414 00:43:06.247615 2695 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 00:43:06.288147 containerd[1585]: time="2026-04-14T00:43:06.288004171Z" level=warning msg="cleanup warnings time=\"2026-04-14T00:43:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 00:43:06.587042 kubelet[2695]: I0414 00:43:06.586559 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgl4\" (UniqueName: \"kubernetes.io/projected/7a147b57-8111-4288-bc48-06e9f79fcd93-kube-api-access-drgl4\") pod \"calico-apiserver-84b6745c75-8x5bs\" (UID: \"7a147b57-8111-4288-bc48-06e9f79fcd93\") " pod="calico-system/calico-apiserver-84b6745c75-8x5bs" Apr 14 00:43:06.587042 kubelet[2695]: I0414 00:43:06.586830 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2sh2\" (UniqueName: \"kubernetes.io/projected/a72a045a-2130-4394-bddf-040f07946381-kube-api-access-k2sh2\") pod \"whisker-596d75bdd-w8wwn\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:06.587042 kubelet[2695]: I0414 
00:43:06.586872 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5jr\" (UniqueName: \"kubernetes.io/projected/9b6fd022-3c9d-4e45-8685-b71788e63101-kube-api-access-rq5jr\") pod \"calico-apiserver-84b6745c75-9h22h\" (UID: \"9b6fd022-3c9d-4e45-8685-b71788e63101\") " pod="calico-system/calico-apiserver-84b6745c75-9h22h" Apr 14 00:43:06.587042 kubelet[2695]: I0414 00:43:06.586898 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6t9\" (UniqueName: \"kubernetes.io/projected/d8b6dcb6-76de-4cca-bc53-2b56358df948-kube-api-access-5d6t9\") pod \"coredns-674b8bbfcf-kbpl9\" (UID: \"d8b6dcb6-76de-4cca-bc53-2b56358df948\") " pod="kube-system/coredns-674b8bbfcf-kbpl9" Apr 14 00:43:06.587042 kubelet[2695]: I0414 00:43:06.586923 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b6fd022-3c9d-4e45-8685-b71788e63101-calico-apiserver-certs\") pod \"calico-apiserver-84b6745c75-9h22h\" (UID: \"9b6fd022-3c9d-4e45-8685-b71788e63101\") " pod="calico-system/calico-apiserver-84b6745c75-9h22h" Apr 14 00:43:06.587698 kubelet[2695]: I0414 00:43:06.587042 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a72a045a-2130-4394-bddf-040f07946381-whisker-backend-key-pair\") pod \"whisker-596d75bdd-w8wwn\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:06.587698 kubelet[2695]: I0414 00:43:06.587079 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a147b57-8111-4288-bc48-06e9f79fcd93-calico-apiserver-certs\") pod \"calico-apiserver-84b6745c75-8x5bs\" (UID: \"7a147b57-8111-4288-bc48-06e9f79fcd93\") " pod="calico-system/calico-apiserver-84b6745c75-8x5bs" Apr 14 00:43:06.587698 kubelet[2695]: I0414 00:43:06.587108 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0177e648-4be2-489a-8d4b-4fbf09efab64-tigera-ca-bundle\") pod \"calico-kube-controllers-66798d99fc-kp248\" (UID: \"0177e648-4be2-489a-8d4b-4fbf09efab64\") " pod="calico-system/calico-kube-controllers-66798d99fc-kp248" Apr 14 00:43:06.587698 kubelet[2695]: I0414 00:43:06.587159 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-nginx-config\") pod \"whisker-596d75bdd-w8wwn\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:06.587698 kubelet[2695]: I0414 00:43:06.587219 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-whisker-ca-bundle\") pod \"whisker-596d75bdd-w8wwn\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:06.587922 kubelet[2695]: I0414 00:43:06.587330 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c0cb2ee9-3046-46eb-8cd5-09888325a08a-config\") pod \"goldmane-5b85766d88-vdfvw\" (UID: \"c0cb2ee9-3046-46eb-8cd5-09888325a08a\") " pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:06.587922 kubelet[2695]: I0414 00:43:06.587356 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0cb2ee9-3046-46eb-8cd5-09888325a08a-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-vdfvw\" (UID: \"c0cb2ee9-3046-46eb-8cd5-09888325a08a\") " pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:06.587922 kubelet[2695]: I0414 00:43:06.587378 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c0cb2ee9-3046-46eb-8cd5-09888325a08a-goldmane-key-pair\") pod \"goldmane-5b85766d88-vdfvw\" (UID: \"c0cb2ee9-3046-46eb-8cd5-09888325a08a\") " pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:06.587922 kubelet[2695]: I0414 00:43:06.587409 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22rl\" (UniqueName: \"kubernetes.io/projected/654da388-0013-4ba6-80c1-00c7d3ddbbd4-kube-api-access-l22rl\") pod \"coredns-674b8bbfcf-zz9l6\" (UID: \"654da388-0013-4ba6-80c1-00c7d3ddbbd4\") " pod="kube-system/coredns-674b8bbfcf-zz9l6" Apr 14 00:43:06.587922 kubelet[2695]: I0414 00:43:06.587433 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjk2\" (UniqueName: \"kubernetes.io/projected/0177e648-4be2-489a-8d4b-4fbf09efab64-kube-api-access-jvjk2\") pod \"calico-kube-controllers-66798d99fc-kp248\" (UID: \"0177e648-4be2-489a-8d4b-4fbf09efab64\") " pod="calico-system/calico-kube-controllers-66798d99fc-kp248" Apr 14 00:43:06.588068 kubelet[2695]: I0414 00:43:06.587456 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/654da388-0013-4ba6-80c1-00c7d3ddbbd4-config-volume\") pod \"coredns-674b8bbfcf-zz9l6\" (UID: \"654da388-0013-4ba6-80c1-00c7d3ddbbd4\") " pod="kube-system/coredns-674b8bbfcf-zz9l6" Apr 14 00:43:06.588068 kubelet[2695]: I0414 00:43:06.587475 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8b6dcb6-76de-4cca-bc53-2b56358df948-config-volume\") pod \"coredns-674b8bbfcf-kbpl9\" (UID: \"d8b6dcb6-76de-4cca-bc53-2b56358df948\") " pod="kube-system/coredns-674b8bbfcf-kbpl9" Apr 14 00:43:06.594177 kubelet[2695]: I0414 00:43:06.594004 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54w4q\" (UniqueName: \"kubernetes.io/projected/c0cb2ee9-3046-46eb-8cd5-09888325a08a-kube-api-access-54w4q\") pod \"goldmane-5b85766d88-vdfvw\" (UID: \"c0cb2ee9-3046-46eb-8cd5-09888325a08a\") " pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:06.661459 containerd[1585]: time="2026-04-14T00:43:06.658880464Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 14 00:43:06.737767 containerd[1585]: time="2026-04-14T00:43:06.737420365Z" level=info msg="CreateContainer within sandbox \"4fec8d258e7107ca6a0f0afc5e6160b26b47643f586a014f37cb7f665ecd315d\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"46d6896910b34cd1a5328c2c20ccf64d51a78e703be3ef6bdb9787362a5335f8\"" Apr 14 00:43:06.745100 containerd[1585]: time="2026-04-14T00:43:06.744988680Z" level=info msg="StartContainer for \"46d6896910b34cd1a5328c2c20ccf64d51a78e703be3ef6bdb9787362a5335f8\"" Apr 14 00:43:06.794614 kubelet[2695]: E0414 00:43:06.793852 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:06.800664 containerd[1585]: time="2026-04-14T00:43:06.799919417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbpl9,Uid:d8b6dcb6-76de-4cca-bc53-2b56358df948,Namespace:kube-system,Attempt:0,}" Apr 14 00:43:06.836077 containerd[1585]: time="2026-04-14T00:43:06.835977413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxm2k,Uid:84892692-33db-4109-aafb-76ce1e050199,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:06.837000 containerd[1585]: time="2026-04-14T00:43:06.836734885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-9h22h,Uid:9b6fd022-3c9d-4e45-8685-b71788e63101,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:06.867195 containerd[1585]: time="2026-04-14T00:43:06.865753538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d75bdd-w8wwn,Uid:a72a045a-2130-4394-bddf-040f07946381,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:06.875391 containerd[1585]: time="2026-04-14T00:43:06.872820739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vdfvw,Uid:c0cb2ee9-3046-46eb-8cd5-09888325a08a,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:06.995033 kubelet[2695]: E0414 00:43:06.994975 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:07.020164 containerd[1585]: time="2026-04-14T00:43:07.020014928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-8x5bs,Uid:7a147b57-8111-4288-bc48-06e9f79fcd93,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:07.022722 containerd[1585]: time="2026-04-14T00:43:07.020877994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zz9l6,Uid:654da388-0013-4ba6-80c1-00c7d3ddbbd4,Namespace:kube-system,Attempt:0,}" Apr 14 00:43:07.082039 containerd[1585]: time="2026-04-14T00:43:07.080766450Z" level=info msg="StartContainer for \"46d6896910b34cd1a5328c2c20ccf64d51a78e703be3ef6bdb9787362a5335f8\" returns successfully" Apr 14 00:43:07.092261 containerd[1585]: time="2026-04-14T00:43:07.091826193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66798d99fc-kp248,Uid:0177e648-4be2-489a-8d4b-4fbf09efab64,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:07.684146 kubelet[2695]: I0414 00:43:07.683967 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-df4b4" podStartSLOduration=4.243269223 podStartE2EDuration="30.683925346s" podCreationTimestamp="2026-04-14 00:42:37 +0000 UTC" firstStartedPulling="2026-04-14 00:42:37.970971017 +0000 UTC m=+20.497973326" lastFinishedPulling="2026-04-14 00:43:04.41162713 +0000 UTC m=+46.938629449" observedRunningTime="2026-04-14 00:43:07.682973602 +0000 UTC m=+50.209975925" watchObservedRunningTime="2026-04-14 00:43:07.683925346 +0000 UTC 
m=+50.210927675" Apr 14 00:43:07.978708 containerd[1585]: time="2026-04-14T00:43:07.977903482Z" level=error msg="Failed to destroy network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:07.979386 containerd[1585]: time="2026-04-14T00:43:07.979123076Z" level=error msg="encountered an error cleaning up failed sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:07.984981 containerd[1585]: time="2026-04-14T00:43:07.983713973Z" level=error msg="Failed to destroy network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:07.988065 containerd[1585]: time="2026-04-14T00:43:07.987964411Z" level=error msg="encountered an error cleaning up failed sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:07.992768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f-shm.mount: Deactivated successfully. 
Apr 14 00:43:08.016647 containerd[1585]: time="2026-04-14T00:43:08.015628587Z" level=error msg="Failed to destroy network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.021627 containerd[1585]: time="2026-04-14T00:43:08.021570871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxm2k,Uid:84892692-33db-4109-aafb-76ce1e050199,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.024003 containerd[1585]: time="2026-04-14T00:43:08.022976778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-9h22h,Uid:9b6fd022-3c9d-4e45-8685-b71788e63101,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.028159 containerd[1585]: time="2026-04-14T00:43:08.027932430Z" level=error msg="encountered an error cleaning up failed sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.028159 containerd[1585]: time="2026-04-14T00:43:08.028066476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vdfvw,Uid:c0cb2ee9-3046-46eb-8cd5-09888325a08a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.061470 kubelet[2695]: E0414 00:43:08.061351 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.062804 kubelet[2695]: E0414 00:43:08.062102 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.064328 kubelet[2695]: E0414 00:43:08.062186 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.065427 kubelet[2695]: E0414 00:43:08.064765 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:08.065427 kubelet[2695]: E0414 00:43:08.064809 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-vdfvw" Apr 14 00:43:08.065914 kubelet[2695]: E0414 00:43:08.065667 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84b6745c75-9h22h" Apr 14 00:43:08.065914 kubelet[2695]: E0414 00:43:08.065784 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84b6745c75-9h22h" Apr 14 00:43:08.066461 kubelet[2695]: E0414 00:43:08.066299 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-vdfvw_calico-system(c0cb2ee9-3046-46eb-8cd5-09888325a08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-vdfvw_calico-system(c0cb2ee9-3046-46eb-8cd5-09888325a08a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-vdfvw" podUID="c0cb2ee9-3046-46eb-8cd5-09888325a08a" Apr 14 00:43:08.066461 kubelet[2695]: E0414 00:43:08.066375 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b6745c75-9h22h_calico-system(9b6fd022-3c9d-4e45-8685-b71788e63101)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b6745c75-9h22h_calico-system(9b6fd022-3c9d-4e45-8685-b71788e63101)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84b6745c75-9h22h" podUID="9b6fd022-3c9d-4e45-8685-b71788e63101" Apr 14 00:43:08.067997 kubelet[2695]: E0414 00:43:08.066582 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:43:08.068971 kubelet[2695]: E0414 00:43:08.068100 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxm2k" Apr 14 00:43:08.068971 kubelet[2695]: E0414 00:43:08.068608 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nxm2k_calico-system(84892692-33db-4109-aafb-76ce1e050199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nxm2k_calico-system(84892692-33db-4109-aafb-76ce1e050199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nxm2k" podUID="84892692-33db-4109-aafb-76ce1e050199" Apr 14 00:43:08.138840 containerd[1585]: time="2026-04-14T00:43:08.138159069Z" level=error msg="Failed to destroy network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.139444 containerd[1585]: time="2026-04-14T00:43:08.139179604Z" level=error msg="encountered an error cleaning up failed sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.139675 containerd[1585]: time="2026-04-14T00:43:08.139604681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d75bdd-w8wwn,Uid:a72a045a-2130-4394-bddf-040f07946381,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.141197 kubelet[2695]: E0414 
00:43:08.140457 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.141197 kubelet[2695]: E0414 00:43:08.140652 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:08.141197 kubelet[2695]: E0414 00:43:08.140682 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596d75bdd-w8wwn" Apr 14 00:43:08.144937 kubelet[2695]: E0414 00:43:08.140749 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-596d75bdd-w8wwn_calico-system(a72a045a-2130-4394-bddf-040f07946381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-596d75bdd-w8wwn_calico-system(a72a045a-2130-4394-bddf-040f07946381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596d75bdd-w8wwn" podUID="a72a045a-2130-4394-bddf-040f07946381" Apr 14 00:43:08.160281 containerd[1585]: time="2026-04-14T00:43:08.160184012Z" level=error msg="Failed to destroy network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.161194 containerd[1585]: time="2026-04-14T00:43:08.161156539Z" level=error msg="encountered an error cleaning up failed sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.165195 containerd[1585]: time="2026-04-14T00:43:08.165123891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbpl9,Uid:d8b6dcb6-76de-4cca-bc53-2b56358df948,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 14 00:43:08.168573 kubelet[2695]: E0414 00:43:08.167348 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.170689 kubelet[2695]: E0414 00:43:08.170337 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kbpl9" Apr 14 00:43:08.173156 kubelet[2695]: E0414 00:43:08.172035 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kbpl9" Apr 14 00:43:08.174034 kubelet[2695]: E0414 00:43:08.173929 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kbpl9_kube-system(d8b6dcb6-76de-4cca-bc53-2b56358df948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kbpl9_kube-system(d8b6dcb6-76de-4cca-bc53-2b56358df948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kbpl9" podUID="d8b6dcb6-76de-4cca-bc53-2b56358df948" Apr 14 00:43:08.237722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701-shm.mount: Deactivated successfully. Apr 14 00:43:08.237969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc-shm.mount: Deactivated successfully. Apr 14 00:43:08.238088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8-shm.mount: Deactivated successfully. Apr 14 00:43:08.238188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222-shm.mount: Deactivated successfully. 
Apr 14 00:43:08.637813 kubelet[2695]: I0414 00:43:08.636827 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:08.655658 kubelet[2695]: I0414 00:43:08.652849 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:08.669196 containerd[1585]: time="2026-04-14T00:43:08.669124614Z" level=info msg="StopPodSandbox for \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\"" Apr 14 00:43:08.674595 kubelet[2695]: I0414 00:43:08.674391 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:08.675824 containerd[1585]: time="2026-04-14T00:43:08.673702258Z" level=info msg="StopPodSandbox for \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\"" Apr 14 00:43:08.675824 containerd[1585]: time="2026-04-14T00:43:08.675754174Z" level=info msg="StopPodSandbox for \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\"" Apr 14 00:43:08.678458 containerd[1585]: time="2026-04-14T00:43:08.677861486Z" level=info msg="Ensure that sandbox 5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701 in task-service has been cleanup successfully" Apr 14 00:43:08.679725 containerd[1585]: time="2026-04-14T00:43:08.678979603Z" level=info msg="Ensure that sandbox 6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc in task-service has been cleanup successfully" Apr 14 00:43:08.679725 containerd[1585]: time="2026-04-14T00:43:08.679037222Z" level=info msg="Ensure that sandbox ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f in task-service has been cleanup successfully" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.432 [INFO][3842] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.432 [INFO][3842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" iface="eth0" netns="/var/run/netns/cni-79706c8a-c557-6334-c55a-db4c48b7076b" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.433 [INFO][3842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" iface="eth0" netns="/var/run/netns/cni-79706c8a-c557-6334-c55a-db4c48b7076b" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.436 [INFO][3842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" iface="eth0" netns="/var/run/netns/cni-79706c8a-c557-6334-c55a-db4c48b7076b" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.437 [INFO][3842] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.437 [INFO][3842] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.608 [INFO][3925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" HandleID="k8s-pod-network.2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.609 [INFO][3925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.609 [INFO][3925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.632 [WARNING][3925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" HandleID="k8s-pod-network.2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.633 [INFO][3925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" HandleID="k8s-pod-network.2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.643 [INFO][3925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:08.695639 containerd[1585]: 2026-04-14 00:43:08.664 [INFO][3842] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463" Apr 14 00:43:08.699478 kubelet[2695]: I0414 00:43:08.699391 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:08.700078 systemd[1]: run-netns-cni\x2d79706c8a\x2dc557\x2d6334\x2dc55a\x2ddb4c48b7076b.mount: Deactivated successfully. Apr 14 00:43:08.704003 containerd[1585]: time="2026-04-14T00:43:08.700656049Z" level=info msg="StopPodSandbox for \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\"" Apr 14 00:43:08.704003 containerd[1585]: time="2026-04-14T00:43:08.700823871Z" level=info msg="Ensure that sandbox 95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8 in task-service has been cleanup successfully" Apr 14 00:43:08.715989 kubelet[2695]: I0414 00:43:08.715211 2695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:08.715656 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463-shm.mount: Deactivated successfully. 
Apr 14 00:43:08.717676 containerd[1585]: time="2026-04-14T00:43:08.716952973Z" level=info msg="StopPodSandbox for \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\"" Apr 14 00:43:08.732702 containerd[1585]: time="2026-04-14T00:43:08.732649292Z" level=info msg="Ensure that sandbox a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222 in task-service has been cleanup successfully" Apr 14 00:43:08.734286 containerd[1585]: time="2026-04-14T00:43:08.733037683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zz9l6,Uid:654da388-0013-4ba6-80c1-00c7d3ddbbd4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.734359 kubelet[2695]: E0414 00:43:08.733443 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.734359 kubelet[2695]: E0414 00:43:08.733554 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zz9l6" Apr 14 00:43:08.734359 kubelet[2695]: E0414 00:43:08.733575 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zz9l6" Apr 14 00:43:08.734468 kubelet[2695]: E0414 00:43:08.733622 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zz9l6_kube-system(654da388-0013-4ba6-80c1-00c7d3ddbbd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zz9l6_kube-system(654da388-0013-4ba6-80c1-00c7d3ddbbd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f50ba43f77d6fff55515853f3f77d1fe350c932618376f5d09f78178f26d463\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zz9l6" podUID="654da388-0013-4ba6-80c1-00c7d3ddbbd4" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.431 [INFO][3880] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.432 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" iface="eth0" netns="/var/run/netns/cni-89e67641-b273-f913-f2dc-d2d3838ef930" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.433 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" iface="eth0" netns="/var/run/netns/cni-89e67641-b273-f913-f2dc-d2d3838ef930" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.436 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" iface="eth0" netns="/var/run/netns/cni-89e67641-b273-f913-f2dc-d2d3838ef930" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.436 [INFO][3880] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.437 [INFO][3880] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.621 [INFO][3927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" HandleID="k8s-pod-network.2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.622 [INFO][3927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.645 [INFO][3927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.678 [WARNING][3927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" HandleID="k8s-pod-network.2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.679 [INFO][3927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" HandleID="k8s-pod-network.2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.709 [INFO][3927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:08.761049 containerd[1585]: 2026-04-14 00:43:08.746 [INFO][3880] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a" Apr 14 00:43:08.772849 systemd[1]: run-netns-cni\x2d89e67641\x2db273\x2df913\x2df2dc\x2dd2d3838ef930.mount: Deactivated successfully. Apr 14 00:43:08.773056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a-shm.mount: Deactivated successfully. 
Apr 14 00:43:08.791092 containerd[1585]: time="2026-04-14T00:43:08.790693523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66798d99fc-kp248,Uid:0177e648-4be2-489a-8d4b-4fbf09efab64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.793787 kubelet[2695]: E0414 00:43:08.792869 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.793787 kubelet[2695]: E0414 00:43:08.792954 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66798d99fc-kp248" Apr 14 00:43:08.793787 kubelet[2695]: E0414 00:43:08.792981 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66798d99fc-kp248" Apr 14 00:43:08.794562 kubelet[2695]: E0414 00:43:08.793096 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66798d99fc-kp248_calico-system(0177e648-4be2-489a-8d4b-4fbf09efab64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66798d99fc-kp248_calico-system(0177e648-4be2-489a-8d4b-4fbf09efab64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d28dae06b8cbfe17e91abb4b4bb0447281fd190b0ba055e15d8f497e678b61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66798d99fc-kp248" podUID="0177e648-4be2-489a-8d4b-4fbf09efab64" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.557 [INFO][3902] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.557 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" iface="eth0" netns="/var/run/netns/cni-b3436b27-9a42-8115-20f2-fbb8abe9e200" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.558 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" iface="eth0" netns="/var/run/netns/cni-b3436b27-9a42-8115-20f2-fbb8abe9e200" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.559 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" iface="eth0" netns="/var/run/netns/cni-b3436b27-9a42-8115-20f2-fbb8abe9e200" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.559 [INFO][3902] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.559 [INFO][3902] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.708 [INFO][3943] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" HandleID="k8s-pod-network.b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.709 [INFO][3943] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.735 [INFO][3943] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.781 [WARNING][3943] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" HandleID="k8s-pod-network.b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.781 [INFO][3943] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" HandleID="k8s-pod-network.b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.798 [INFO][3943] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:08.848085 containerd[1585]: 2026-04-14 00:43:08.831 [INFO][3902] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b" Apr 14 00:43:08.883007 containerd[1585]: time="2026-04-14T00:43:08.882844167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-8x5bs,Uid:7a147b57-8111-4288-bc48-06e9f79fcd93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.887060 kubelet[2695]: E0414 00:43:08.885190 2695 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 00:43:08.889093 kubelet[2695]: E0414 00:43:08.888722 2695 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84b6745c75-8x5bs" Apr 14 00:43:08.890144 kubelet[2695]: E0414 00:43:08.889085 2695 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84b6745c75-8x5bs" Apr 14 00:43:08.895959 kubelet[2695]: E0414 00:43:08.891944 2695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b6745c75-8x5bs_calico-system(7a147b57-8111-4288-bc48-06e9f79fcd93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b6745c75-8x5bs_calico-system(7a147b57-8111-4288-bc48-06e9f79fcd93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84b6745c75-8x5bs" podUID="7a147b57-8111-4288-bc48-06e9f79fcd93" Apr 14 00:43:09.240110 systemd[1]: run-netns-cni\x2db3436b27\x2d9a42\x2d8115\x2d20f2\x2dfbb8abe9e200.mount: Deactivated successfully. Apr 14 00:43:09.241102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9baefdf93f58b88944d1a46d4b28248945d21b5a6455ed72c242268c5831d2b-shm.mount: Deactivated successfully. 
Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.170 [INFO][3996] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.170 [INFO][3996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" iface="eth0" netns="/var/run/netns/cni-2fd14be4-7977-97b1-1d8c-dc00edeb1453" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.171 [INFO][3996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" iface="eth0" netns="/var/run/netns/cni-2fd14be4-7977-97b1-1d8c-dc00edeb1453" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.172 [INFO][3996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" iface="eth0" netns="/var/run/netns/cni-2fd14be4-7977-97b1-1d8c-dc00edeb1453" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.172 [INFO][3996] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.172 [INFO][3996] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.390 [INFO][4071] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.393 [INFO][4071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.394 [INFO][4071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.429 [WARNING][4071] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.429 [INFO][4071] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.439 [INFO][4071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:09.511479 containerd[1585]: 2026-04-14 00:43:09.501 [INFO][3996] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:09.511479 containerd[1585]: time="2026-04-14T00:43:09.510846150Z" level=info msg="TearDown network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" successfully" Apr 14 00:43:09.511479 containerd[1585]: time="2026-04-14T00:43:09.510906815Z" level=info msg="StopPodSandbox for \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" returns successfully" Apr 14 00:43:09.520848 systemd[1]: run-netns-cni\x2d2fd14be4\x2d7977\x2d97b1\x2d1d8c\x2ddc00edeb1453.mount: Deactivated successfully. Apr 14 00:43:09.526682 containerd[1585]: time="2026-04-14T00:43:09.526385184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-9h22h,Uid:9b6fd022-3c9d-4e45-8685-b71788e63101,Namespace:calico-system,Attempt:1,}" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.247 [INFO][4008] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.251 [INFO][4008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" iface="eth0" netns="/var/run/netns/cni-b7b67ec8-e55c-819e-57e0-60435c61b849" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.252 [INFO][4008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" iface="eth0" netns="/var/run/netns/cni-b7b67ec8-e55c-819e-57e0-60435c61b849" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.254 [INFO][4008] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" iface="eth0" netns="/var/run/netns/cni-b7b67ec8-e55c-819e-57e0-60435c61b849" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.254 [INFO][4008] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.254 [INFO][4008] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.390 [INFO][4093] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.395 [INFO][4093] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.439 [INFO][4093] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.519 [WARNING][4093] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.523 [INFO][4093] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.536 [INFO][4093] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:09.553704 containerd[1585]: 2026-04-14 00:43:09.551 [INFO][4008] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:09.562033 containerd[1585]: time="2026-04-14T00:43:09.561934104Z" level=info msg="TearDown network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" successfully" Apr 14 00:43:09.564051 containerd[1585]: time="2026-04-14T00:43:09.562650575Z" level=info msg="StopPodSandbox for \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" returns successfully" Apr 14 00:43:09.564116 systemd[1]: run-netns-cni\x2db7b67ec8\x2de55c\x2d819e\x2d57e0\x2d60435c61b849.mount: Deactivated successfully. Apr 14 00:43:09.568806 containerd[1585]: time="2026-04-14T00:43:09.566222955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxm2k,Uid:84892692-33db-4109-aafb-76ce1e050199,Namespace:calico-system,Attempt:1,}" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.174 [INFO][3993] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.174 [INFO][3993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" iface="eth0" netns="/var/run/netns/cni-48c69203-60a2-be6b-f388-8d06f3ac96cd" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.177 [INFO][3993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" iface="eth0" netns="/var/run/netns/cni-48c69203-60a2-be6b-f388-8d06f3ac96cd" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.195 [INFO][3993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" iface="eth0" netns="/var/run/netns/cni-48c69203-60a2-be6b-f388-8d06f3ac96cd" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.195 [INFO][3993] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.195 [INFO][3993] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.426 [INFO][4079] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.426 [INFO][4079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.542 [INFO][4079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.564 [WARNING][4079] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.564 [INFO][4079] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.571 [INFO][4079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:09.588761 containerd[1585]: 2026-04-14 00:43:09.579 [INFO][3993] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:09.589371 containerd[1585]: time="2026-04-14T00:43:09.589277332Z" level=info msg="TearDown network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" successfully" Apr 14 00:43:09.589371 containerd[1585]: time="2026-04-14T00:43:09.589310365Z" level=info msg="StopPodSandbox for \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" returns successfully" Apr 14 00:43:09.591109 containerd[1585]: time="2026-04-14T00:43:09.590898115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vdfvw,Uid:c0cb2ee9-3046-46eb-8cd5-09888325a08a,Namespace:calico-system,Attempt:1,}" Apr 14 00:43:09.598071 systemd[1]: run-netns-cni\x2d48c69203\x2d60a2\x2dbe6b\x2df388\x2d8d06f3ac96cd.mount: Deactivated successfully. Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.163 [INFO][4029] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.179 [INFO][4029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" iface="eth0" netns="/var/run/netns/cni-8e2d019f-f6f6-9fc8-d07f-4d913be5cac0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.191 [INFO][4029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" iface="eth0" netns="/var/run/netns/cni-8e2d019f-f6f6-9fc8-d07f-4d913be5cac0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.192 [INFO][4029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" iface="eth0" netns="/var/run/netns/cni-8e2d019f-f6f6-9fc8-d07f-4d913be5cac0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.192 [INFO][4029] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.192 [INFO][4029] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.493 [INFO][4077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.497 [INFO][4077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.572 [INFO][4077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.600 [WARNING][4077] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.600 [INFO][4077] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.609 [INFO][4077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:09.629672 containerd[1585]: 2026-04-14 00:43:09.623 [INFO][4029] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:09.636823 containerd[1585]: time="2026-04-14T00:43:09.635371035Z" level=info msg="TearDown network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" successfully" Apr 14 00:43:09.636823 containerd[1585]: time="2026-04-14T00:43:09.635458736Z" level=info msg="StopPodSandbox for \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" returns successfully" Apr 14 00:43:09.638105 kubelet[2695]: E0414 00:43:09.637592 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:09.648313 containerd[1585]: time="2026-04-14T00:43:09.647599358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbpl9,Uid:d8b6dcb6-76de-4cca-bc53-2b56358df948,Namespace:kube-system,Attempt:1,}" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.262 [INFO][4005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.268 [INFO][4005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" iface="eth0" netns="/var/run/netns/cni-4f677902-c8f6-4207-85bf-1bd05168b1c3" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.271 [INFO][4005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" iface="eth0" netns="/var/run/netns/cni-4f677902-c8f6-4207-85bf-1bd05168b1c3" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.272 [INFO][4005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" iface="eth0" netns="/var/run/netns/cni-4f677902-c8f6-4207-85bf-1bd05168b1c3" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.272 [INFO][4005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.272 [INFO][4005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.538 [INFO][4097] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.554 [INFO][4097] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.610 [INFO][4097] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.676 [WARNING][4097] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.677 [INFO][4097] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.690 [INFO][4097] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:09.708749 containerd[1585]: 2026-04-14 00:43:09.698 [INFO][4005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:09.710647 containerd[1585]: time="2026-04-14T00:43:09.710147148Z" level=info msg="TearDown network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" successfully" Apr 14 00:43:09.710647 containerd[1585]: time="2026-04-14T00:43:09.710331112Z" level=info msg="StopPodSandbox for \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" returns successfully" Apr 14 00:43:09.740322 kubelet[2695]: E0414 00:43:09.740267 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:09.755195 containerd[1585]: time="2026-04-14T00:43:09.752315608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zz9l6,Uid:654da388-0013-4ba6-80c1-00c7d3ddbbd4,Namespace:kube-system,Attempt:0,}" Apr 14 00:43:09.755195 containerd[1585]: time="2026-04-14T00:43:09.752377975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-8x5bs,Uid:7a147b57-8111-4288-bc48-06e9f79fcd93,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:09.755195 containerd[1585]: time="2026-04-14T00:43:09.752875806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66798d99fc-kp248,Uid:0177e648-4be2-489a-8d4b-4fbf09efab64,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:09.818634 kubelet[2695]: I0414 00:43:09.817730 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-nginx-config\") pod \"a72a045a-2130-4394-bddf-040f07946381\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " Apr 14 00:43:09.818634 kubelet[2695]: I0414 00:43:09.817800 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a72a045a-2130-4394-bddf-040f07946381-whisker-backend-key-pair\") pod \"a72a045a-2130-4394-bddf-040f07946381\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " Apr 14 00:43:09.818634 kubelet[2695]: I0414 00:43:09.817826 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2sh2\" (UniqueName: \"kubernetes.io/projected/a72a045a-2130-4394-bddf-040f07946381-kube-api-access-k2sh2\") pod \"a72a045a-2130-4394-bddf-040f07946381\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " Apr 14 00:43:09.818634 kubelet[2695]: I0414 00:43:09.817853 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-whisker-ca-bundle\") pod \"a72a045a-2130-4394-bddf-040f07946381\" (UID: \"a72a045a-2130-4394-bddf-040f07946381\") " Apr 14 00:43:09.827825 kubelet[2695]: I0414 00:43:09.827756 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a72a045a-2130-4394-bddf-040f07946381" (UID: "a72a045a-2130-4394-bddf-040f07946381"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 00:43:09.830553 kubelet[2695]: I0414 00:43:09.830408 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a72a045a-2130-4394-bddf-040f07946381" (UID: "a72a045a-2130-4394-bddf-040f07946381"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 00:43:09.837225 kubelet[2695]: I0414 00:43:09.837089 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a72a045a-2130-4394-bddf-040f07946381-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a72a045a-2130-4394-bddf-040f07946381" (UID: "a72a045a-2130-4394-bddf-040f07946381"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 00:43:09.839188 kubelet[2695]: I0414 00:43:09.839070 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72a045a-2130-4394-bddf-040f07946381-kube-api-access-k2sh2" (OuterVolumeSpecName: "kube-api-access-k2sh2") pod "a72a045a-2130-4394-bddf-040f07946381" (UID: "a72a045a-2130-4394-bddf-040f07946381"). InnerVolumeSpecName "kube-api-access-k2sh2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 00:43:09.918948 kubelet[2695]: I0414 00:43:09.918912 2695 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 14 00:43:09.919343 kubelet[2695]: I0414 00:43:09.919214 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a72a045a-2130-4394-bddf-040f07946381-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 14 00:43:09.919343 kubelet[2695]: I0414 00:43:09.919235 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k2sh2\" (UniqueName: \"kubernetes.io/projected/a72a045a-2130-4394-bddf-040f07946381-kube-api-access-k2sh2\") on node \"localhost\" DevicePath \"\"" Apr 14 00:43:09.919343 kubelet[2695]: I0414 00:43:09.919323 2695 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a72a045a-2130-4394-bddf-040f07946381-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 14 00:43:10.253613 systemd[1]: run-netns-cni\x2d4f677902\x2dc8f6\x2d4207\x2d85bf\x2d1bd05168b1c3.mount: Deactivated successfully. Apr 14 00:43:10.255047 systemd[1]: run-netns-cni\x2d8e2d019f\x2df6f6\x2d9fc8\x2dd07f\x2d4d913be5cac0.mount: Deactivated successfully. 
Apr 14 00:43:10.255658 systemd[1]: var-lib-kubelet-pods-a72a045a\x2d2130\x2d4394\x2dbddf\x2d040f07946381-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk2sh2.mount: Deactivated successfully. Apr 14 00:43:10.256000 systemd[1]: var-lib-kubelet-pods-a72a045a\x2d2130\x2d4394\x2dbddf\x2d040f07946381-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 14 00:43:10.423215 systemd-networkd[1257]: cali07de914d00b: Link UP Apr 14 00:43:10.492779 systemd-networkd[1257]: cali07de914d00b: Gained carrier Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:09.720 [ERROR][4129] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:09.837 [INFO][4129] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0 calico-apiserver-84b6745c75- calico-system 9b6fd022-3c9d-4e45-8685-b71788e63101 1006 0 2026-04-14 00:42:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b6745c75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84b6745c75-9h22h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali07de914d00b [] [] }} ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:09.838 [INFO][4129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.141 [INFO][4214] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" HandleID="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.160 [INFO][4214] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" HandleID="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fd500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-84b6745c75-9h22h", "timestamp":"2026-04-14 00:43:10.141231095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00068c840)} Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.160 [INFO][4214] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.161 [INFO][4214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.161 [INFO][4214] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.176 [INFO][4214] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.202 [INFO][4214] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.238 [INFO][4214] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.250 [INFO][4214] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.261 [INFO][4214] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.262 [INFO][4214] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.273 [INFO][4214] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03 Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.303 [INFO][4214] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.331 [INFO][4214] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.332 [INFO][4214] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" host="localhost" Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.332 [INFO][4214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:10.628386 containerd[1585]: 2026-04-14 00:43:10.332 [INFO][4214] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" HandleID="k8s-pod-network.68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.351 [INFO][4129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"9b6fd022-3c9d-4e45-8685-b71788e63101", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84b6745c75-9h22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali07de914d00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.356 [INFO][4129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.365 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07de914d00b ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.539 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.559 [INFO][4129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"9b6fd022-3c9d-4e45-8685-b71788e63101", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03", Pod:"calico-apiserver-84b6745c75-9h22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali07de914d00b", MAC:"ea:8b:46:d6:df:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:10.630415 containerd[1585]: 2026-04-14 00:43:10.608 [INFO][4129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-9h22h" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:10.646421 systemd-networkd[1257]: cali67a48a57600: Link UP Apr 14 00:43:10.675438 systemd-networkd[1257]: cali67a48a57600: Gained carrier Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:09.771 [ERROR][4163] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:09.850 [INFO][4163] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0 coredns-674b8bbfcf- kube-system d8b6dcb6-76de-4cca-bc53-2b56358df948 1004 0 2026-04-14 00:42:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-kbpl9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67a48a57600 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:09.850 [INFO][4163] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.142 [INFO][4232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" HandleID="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.237 [INFO][4232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" HandleID="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b16a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-kbpl9", "timestamp":"2026-04-14 00:43:10.142107347 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000426420)} Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.238 [INFO][4232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.332 [INFO][4232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.332 [INFO][4232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.351 [INFO][4232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.374 [INFO][4232] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.396 [INFO][4232] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.413 [INFO][4232] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.432 [INFO][4232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.433 [INFO][4232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.501 [INFO][4232] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043 Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.560 [INFO][4232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" host="localhost" Apr 14 00:43:10.817102 
containerd[1585]: 2026-04-14 00:43:10.609 [INFO][4232] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.610 [INFO][4232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" host="localhost" Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.611 [INFO][4232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:10.817102 containerd[1585]: 2026-04-14 00:43:10.611 [INFO][4232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" HandleID="k8s-pod-network.9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.622 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8b6dcb6-76de-4cca-bc53-2b56358df948", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-kbpl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67a48a57600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.623 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.623 [INFO][4163] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67a48a57600 ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.687 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.699 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8b6dcb6-76de-4cca-bc53-2b56358df948", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043", Pod:"coredns-674b8bbfcf-kbpl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67a48a57600", MAC:"52:4d:87:26:03:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:10.825997 containerd[1585]: 2026-04-14 00:43:10.782 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbpl9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:10.862092 containerd[1585]: time="2026-04-14T00:43:10.860741169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:10.862092 containerd[1585]: time="2026-04-14T00:43:10.860814853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:10.862092 containerd[1585]: time="2026-04-14T00:43:10.860832902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:10.862092 containerd[1585]: time="2026-04-14T00:43:10.860927236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:11.245858 systemd-networkd[1257]: cali2597c3c5a47: Link UP Apr 14 00:43:11.248736 containerd[1585]: time="2026-04-14T00:43:11.245369452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:11.248736 containerd[1585]: time="2026-04-14T00:43:11.245594959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:11.248736 containerd[1585]: time="2026-04-14T00:43:11.245640031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:11.248736 containerd[1585]: time="2026-04-14T00:43:11.245836545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:11.262013 systemd-networkd[1257]: cali2597c3c5a47: Gained carrier Apr 14 00:43:11.265715 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:11.326594 kubelet[2695]: I0414 00:43:11.326378 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5bd17cef-238c-40d1-8571-f156827ee7bf-whisker-backend-key-pair\") pod \"whisker-699f76cc4f-lckv5\" (UID: \"5bd17cef-238c-40d1-8571-f156827ee7bf\") " pod="calico-system/whisker-699f76cc4f-lckv5" Apr 14 00:43:11.326594 kubelet[2695]: I0414 00:43:11.326469 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bd17cef-238c-40d1-8571-f156827ee7bf-whisker-ca-bundle\") pod \"whisker-699f76cc4f-lckv5\" (UID: \"5bd17cef-238c-40d1-8571-f156827ee7bf\") " pod="calico-system/whisker-699f76cc4f-lckv5" Apr 14 00:43:11.326594 kubelet[2695]: I0414 00:43:11.326567 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5bd17cef-238c-40d1-8571-f156827ee7bf-nginx-config\") pod \"whisker-699f76cc4f-lckv5\" (UID: \"5bd17cef-238c-40d1-8571-f156827ee7bf\") " pod="calico-system/whisker-699f76cc4f-lckv5" Apr 14 00:43:11.326594 kubelet[2695]: I0414 00:43:11.326593 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9r7\" (UniqueName: \"kubernetes.io/projected/5bd17cef-238c-40d1-8571-f156827ee7bf-kube-api-access-zm9r7\") pod \"whisker-699f76cc4f-lckv5\" (UID: \"5bd17cef-238c-40d1-8571-f156827ee7bf\") " pod="calico-system/whisker-699f76cc4f-lckv5" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.057 [ERROR][4195] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 
00:43:10.101 [INFO][4195] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0 coredns-674b8bbfcf- kube-system 654da388-0013-4ba6-80c1-00c7d3ddbbd4 982 0 2026-04-14 00:42:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zz9l6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2597c3c5a47 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.102 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.259 [INFO][4266] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" HandleID="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.300 [INFO][4266] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" HandleID="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f80e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zz9l6", "timestamp":"2026-04-14 00:43:10.259949879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000204000)} Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.300 [INFO][4266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.611 [INFO][4266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.615 [INFO][4266] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.667 [INFO][4266] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.724 [INFO][4266] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.842 [INFO][4266] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.865 [INFO][4266] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.883 [INFO][4266] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.885 [INFO][4266] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:10.917 [INFO][4266] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3 Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:11.056 [INFO][4266] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:11.152 [INFO][4266] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:11.152 [INFO][4266] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" host="localhost" Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:11.175 [INFO][4266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:11.558806 containerd[1585]: 2026-04-14 00:43:11.175 [INFO][4266] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" HandleID="k8s-pod-network.24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Workload="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.574712 containerd[1585]: 2026-04-14 00:43:11.215 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"654da388-0013-4ba6-80c1-00c7d3ddbbd4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zz9l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2597c3c5a47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:11.574712 containerd[1585]: 2026-04-14 00:43:11.215 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.574712 containerd[1585]: 2026-04-14 00:43:11.215 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2597c3c5a47 ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.574712 containerd[1585]: 2026-04-14 00:43:11.302 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.574712 
containerd[1585]: 2026-04-14 00:43:11.334 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"654da388-0013-4ba6-80c1-00c7d3ddbbd4", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3", Pod:"coredns-674b8bbfcf-zz9l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2597c3c5a47", MAC:"22:c7:39:63:d8:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:11.574712 containerd[1585]: 2026-04-14 00:43:11.497 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3" Namespace="kube-system" Pod="coredns-674b8bbfcf-zz9l6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zz9l6-eth0" Apr 14 00:43:11.574712 containerd[1585]: time="2026-04-14T00:43:11.566453969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699f76cc4f-lckv5,Uid:5bd17cef-238c-40d1-8571-f156827ee7bf,Namespace:calico-system,Attempt:0,}" Apr 14 00:43:11.584954 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:11.798241 systemd-networkd[1257]: cali07de914d00b: Gained IPv6LL Apr 14 00:43:11.832215 kubelet[2695]: I0414 00:43:11.829148 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72a045a-2130-4394-bddf-040f07946381" path="/var/lib/kubelet/pods/a72a045a-2130-4394-bddf-040f07946381/volumes" Apr 14 00:43:11.861755 containerd[1585]: time="2026-04-14T00:43:11.861697192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-9h22h,Uid:9b6fd022-3c9d-4e45-8685-b71788e63101,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03\"" Apr 14 00:43:11.881928 containerd[1585]: time="2026-04-14T00:43:11.880824387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbpl9,Uid:d8b6dcb6-76de-4cca-bc53-2b56358df948,Namespace:kube-system,Attempt:1,} returns sandbox id \"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043\"" Apr 14 00:43:11.903275 kubelet[2695]: E0414 00:43:11.902700 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:11.904456 containerd[1585]: time="2026-04-14T00:43:11.904306324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 00:43:11.996813 containerd[1585]: time="2026-04-14T00:43:11.996731909Z" level=info msg="CreateContainer within sandbox \"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:43:12.011998 systemd-networkd[1257]: calid3ff7065bfd: Link UP Apr 14 00:43:12.027167 systemd-networkd[1257]: calid3ff7065bfd: Gained carrier Apr 14 00:43:12.055605 containerd[1585]: time="2026-04-14T00:43:12.054981606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:12.055605 containerd[1585]: time="2026-04-14T00:43:12.055075349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:12.055605 containerd[1585]: time="2026-04-14T00:43:12.055093943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:12.056776 containerd[1585]: time="2026-04-14T00:43:12.056437366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.030 [ERROR][4150] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.111 [INFO][4150] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nxm2k-eth0 csi-node-driver- calico-system 84892692-33db-4109-aafb-76ce1e050199 1007 0 2026-04-14 00:42:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nxm2k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid3ff7065bfd [] [] }} ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.111 [INFO][4150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.284 [INFO][4274] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" HandleID="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.308 [INFO][4274] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" HandleID="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdb60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nxm2k", "timestamp":"2026-04-14 00:43:10.284707553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d0420)} Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:10.310 [INFO][4274] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.178 [INFO][4274] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.179 [INFO][4274] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.205 [INFO][4274] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.278 [INFO][4274] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.511 [INFO][4274] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.541 [INFO][4274] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.567 [INFO][4274] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.569 [INFO][4274] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.580 [INFO][4274] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.640 [INFO][4274] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.680 [INFO][4274] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.704 [INFO][4274] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" host="localhost" Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.704 [INFO][4274] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:12.139605 containerd[1585]: 2026-04-14 00:43:11.704 [INFO][4274] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" HandleID="k8s-pod-network.178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:11.805 [INFO][4150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxm2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84892692-33db-4109-aafb-76ce1e050199", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nxm2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3ff7065bfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:11.806 [INFO][4150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:11.806 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3ff7065bfd ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:12.037 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:12.041 [INFO][4150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxm2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84892692-33db-4109-aafb-76ce1e050199", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce", Pod:"csi-node-driver-nxm2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3ff7065bfd", MAC:"1a:82:1b:a9:1b:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.192482 containerd[1585]: 2026-04-14 00:43:12.128 [INFO][4150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce" Namespace="calico-system" Pod="csi-node-driver-nxm2k" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:12.250449 containerd[1585]: time="2026-04-14T00:43:12.247216491Z" level=info msg="CreateContainer within sandbox \"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6463eaed0955c32425d8f607da35801fa32c10992687d713aaa7f47f0fa00afa\"" Apr 14 00:43:12.271047 containerd[1585]: time="2026-04-14T00:43:12.269331274Z" level=info msg="StartContainer for \"6463eaed0955c32425d8f607da35801fa32c10992687d713aaa7f47f0fa00afa\"" Apr 14 00:43:12.272142 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:12.311889 systemd-networkd[1257]: calid55b3df500e: Link UP Apr 14 00:43:12.317672 systemd-networkd[1257]: calid55b3df500e: Gained carrier Apr 14 00:43:12.408847 containerd[1585]: time="2026-04-14T00:43:12.403135463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:12.408847 containerd[1585]: time="2026-04-14T00:43:12.403370005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:12.408847 containerd[1585]: time="2026-04-14T00:43:12.403389466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:12.408847 containerd[1585]: time="2026-04-14T00:43:12.403607000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:12.431084 systemd-networkd[1257]: cali2597c3c5a47: Gained IPv6LL Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.052 [ERROR][4141] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.173 [INFO][4141] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--vdfvw-eth0 goldmane-5b85766d88- calico-system c0cb2ee9-3046-46eb-8cd5-09888325a08a 1003 0 2026-04-14 00:42:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-vdfvw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid55b3df500e [] [] }} ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.173 [INFO][4141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.321 [INFO][4285] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" HandleID="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.343 [INFO][4285] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" HandleID="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047c120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-vdfvw", "timestamp":"2026-04-14 00:43:10.321285161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039e000)} Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:10.344 [INFO][4285] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:11.707 [INFO][4285] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:11.707 [INFO][4285] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:11.839 [INFO][4285] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:11.895 [INFO][4285] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:11.985 [INFO][4285] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.033 [INFO][4285] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.053 [INFO][4285] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.053 [INFO][4285] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.062 [INFO][4285] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968 Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.085 [INFO][4285] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.190 [INFO][4285] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.191 [INFO][4285] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" host="localhost" Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.194 [INFO][4285] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:12.496644 containerd[1585]: 2026-04-14 00:43:12.210 [INFO][4285] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" HandleID="k8s-pod-network.ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.278 [INFO][4141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--vdfvw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c0cb2ee9-3046-46eb-8cd5-09888325a08a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-vdfvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid55b3df500e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.278 [INFO][4141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.278 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid55b3df500e ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.321 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.321 [INFO][4141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--vdfvw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c0cb2ee9-3046-46eb-8cd5-09888325a08a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968", Pod:"goldmane-5b85766d88-vdfvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid55b3df500e", MAC:"3a:70:da:65:7e:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.498619 containerd[1585]: 2026-04-14 00:43:12.371 [INFO][4141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968" Namespace="calico-system" Pod="goldmane-5b85766d88-vdfvw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:12.564576 containerd[1585]: time="2026-04-14T00:43:12.562759112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zz9l6,Uid:654da388-0013-4ba6-80c1-00c7d3ddbbd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3\"" Apr 14 00:43:12.568754 kubelet[2695]: E0414 00:43:12.567795 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:12.621067 systemd-networkd[1257]: cali67a48a57600: Gained IPv6LL Apr 14 00:43:12.647158 containerd[1585]: time="2026-04-14T00:43:12.647085075Z" level=info msg="CreateContainer within sandbox \"24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 14 00:43:12.674618 kernel: calico-node[4346]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 00:43:12.711908 systemd-networkd[1257]: cali489c9c570f0: Link UP Apr 14 00:43:12.717778 systemd-networkd[1257]: cali489c9c570f0: Gained carrier Apr 14 00:43:12.825694 containerd[1585]: time="2026-04-14T00:43:12.824334184Z" level=info msg="CreateContainer within sandbox \"24285e196649e8e21044fcacbb09b8806dd604c65077fc8d4b0ca58915c770e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1d169b4a77c5c5894c077e3acd9e90a054d0b1e36ca9f7dc38582db2e0c661c\"" Apr 14 00:43:12.842817 containerd[1585]: time="2026-04-14T00:43:12.842605669Z" level=info msg="StartContainer for \"f1d169b4a77c5c5894c077e3acd9e90a054d0b1e36ca9f7dc38582db2e0c661c\"" Apr 14 00:43:12.877707 systemd-resolved[1474]: Failed to 
determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.179 [ERROR][4215] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.303 [INFO][4215] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0 calico-apiserver-84b6745c75- calico-system 7a147b57-8111-4288-bc48-06e9f79fcd93 985 0 2026-04-14 00:42:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b6745c75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84b6745c75-8x5bs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali489c9c570f0 [] [] }} ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.303 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.419 [INFO][4301] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" HandleID="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.513 [INFO][4301] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" HandleID="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d62e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-84b6745c75-8x5bs", "timestamp":"2026-04-14 00:43:10.419474929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a29a0)} Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:10.513 [INFO][4301] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.193 [INFO][4301] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.193 [INFO][4301] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.216 [INFO][4301] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.347 [INFO][4301] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.425 [INFO][4301] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.518 [INFO][4301] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.539 [INFO][4301] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.556 [INFO][4301] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.574 [INFO][4301] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.619 [INFO][4301] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.685 [INFO][4301] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.687 [INFO][4301] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" host="localhost" Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.687 [INFO][4301] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:12.925462 containerd[1585]: 2026-04-14 00:43:12.688 [INFO][4301] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" HandleID="k8s-pod-network.080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Workload="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.698 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"7a147b57-8111-4288-bc48-06e9f79fcd93", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84b6745c75-8x5bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali489c9c570f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.700 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.702 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali489c9c570f0 ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.723 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.725 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"7a147b57-8111-4288-bc48-06e9f79fcd93", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b", Pod:"calico-apiserver-84b6745c75-8x5bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali489c9c570f0", MAC:"86:13:02:5f:d5:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:12.929454 containerd[1585]: 2026-04-14 00:43:12.819 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b" Namespace="calico-system" Pod="calico-apiserver-84b6745c75-8x5bs" WorkloadEndpoint="localhost-k8s-calico--apiserver--84b6745c75--8x5bs-eth0" Apr 14 00:43:13.128214 containerd[1585]: time="2026-04-14T00:43:13.121691022Z" level=info msg="StartContainer for \"6463eaed0955c32425d8f607da35801fa32c10992687d713aaa7f47f0fa00afa\" returns successfully" Apr 14 00:43:13.197637 containerd[1585]: time="2026-04-14T00:43:13.192932438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:13.197637 containerd[1585]: time="2026-04-14T00:43:13.192999588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:13.197637 containerd[1585]: time="2026-04-14T00:43:13.193014308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.197637 containerd[1585]: time="2026-04-14T00:43:13.193196638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.265666 containerd[1585]: time="2026-04-14T00:43:13.264058993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxm2k,Uid:84892692-33db-4109-aafb-76ce1e050199,Namespace:calico-system,Attempt:1,} returns sandbox id \"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce\"" Apr 14 00:43:13.331781 containerd[1585]: time="2026-04-14T00:43:13.316058765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:13.331781 containerd[1585]: time="2026-04-14T00:43:13.316180589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:13.331781 containerd[1585]: time="2026-04-14T00:43:13.316203023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.331781 containerd[1585]: time="2026-04-14T00:43:13.316365444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.386369 systemd-networkd[1257]: cali5b6d2757d57: Link UP Apr 14 00:43:13.393819 systemd-networkd[1257]: cali5b6d2757d57: Gained carrier Apr 14 00:43:13.394045 systemd-networkd[1257]: calid3ff7065bfd: Gained IPv6LL Apr 14 00:43:13.510860 systemd[1]: run-containerd-runc-k8s.io-ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968-runc.ENlNZF.mount: Deactivated successfully. Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.291 [ERROR][4233] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.369 [INFO][4233] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0 calico-kube-controllers-66798d99fc- calico-system 0177e648-4be2-489a-8d4b-4fbf09efab64 983 0 2026-04-14 00:42:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66798d99fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66798d99fc-kp248 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b6d2757d57 [] [] }} ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.369 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.600 [INFO][4316] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" 
HandleID="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.723 [INFO][4316] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" HandleID="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e97d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66798d99fc-kp248", "timestamp":"2026-04-14 00:43:10.600842087 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002fa420)} Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:10.723 [INFO][4316] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:12.693 [INFO][4316] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:12.693 [INFO][4316] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:12.727 [INFO][4316] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:12.790 [INFO][4316] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.071 [INFO][4316] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.103 [INFO][4316] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.154 [INFO][4316] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.155 [INFO][4316] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.171 [INFO][4316] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221 Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.202 [INFO][4316] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.272 [INFO][4316] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.273 [INFO][4316] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" host="localhost" Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.273 [INFO][4316] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:13.540304 containerd[1585]: 2026-04-14 00:43:13.326 [INFO][4316] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" HandleID="k8s-pod-network.695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Workload="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.339 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0", GenerateName:"calico-kube-controllers-66798d99fc-", Namespace:"calico-system", SelfLink:"", UID:"0177e648-4be2-489a-8d4b-4fbf09efab64", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66798d99fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66798d99fc-kp248", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b6d2757d57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.339 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.339 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b6d2757d57 ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.405 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" 
Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.415 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0", GenerateName:"calico-kube-controllers-66798d99fc-", Namespace:"calico-system", SelfLink:"", UID:"0177e648-4be2-489a-8d4b-4fbf09efab64", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66798d99fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221", Pod:"calico-kube-controllers-66798d99fc-kp248", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b6d2757d57", MAC:"42:4e:f4:22:b8:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:13.542300 containerd[1585]: 2026-04-14 00:43:13.468 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221" Namespace="calico-system" Pod="calico-kube-controllers-66798d99fc-kp248" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66798d99fc--kp248-eth0" Apr 14 00:43:13.622340 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:13.716769 containerd[1585]: time="2026-04-14T00:43:13.714050855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:13.716769 containerd[1585]: time="2026-04-14T00:43:13.714205775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:13.716769 containerd[1585]: time="2026-04-14T00:43:13.714220861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.716769 containerd[1585]: time="2026-04-14T00:43:13.714729418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:13.839866 systemd-networkd[1257]: calid55b3df500e: Gained IPv6LL Apr 14 00:43:13.853177 systemd-networkd[1257]: cali79ae0e6faed: Link UP Apr 14 00:43:13.860635 containerd[1585]: time="2026-04-14T00:43:13.858120976Z" level=info msg="StartContainer for \"f1d169b4a77c5c5894c077e3acd9e90a054d0b1e36ca9f7dc38582db2e0c661c\" returns successfully" Apr 14 00:43:13.896100 systemd-networkd[1257]: cali79ae0e6faed: Gained carrier Apr 14 00:43:13.924228 containerd[1585]: time="2026-04-14T00:43:13.924170024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-vdfvw,Uid:c0cb2ee9-3046-46eb-8cd5-09888325a08a,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968\"" Apr 14 00:43:13.960474 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:12.768 [INFO][4536] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--699f76cc4f--lckv5-eth0 whisker-699f76cc4f- calico-system 5bd17cef-238c-40d1-8571-f156827ee7bf 1039 0 2026-04-14 00:43:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:699f76cc4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-699f76cc4f-lckv5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali79ae0e6faed [] [] }} ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:12.768 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.281 [INFO][4710] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" HandleID="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Workload="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.329 [INFO][4710] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" HandleID="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Workload="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003679a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-699f76cc4f-lckv5", "timestamp":"2026-04-14 00:43:13.281987853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a2160)} Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.329 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.329 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.329 [INFO][4710] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.460 [INFO][4710] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.528 [INFO][4710] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.557 [INFO][4710] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.573 [INFO][4710] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.596 [INFO][4710] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.596 [INFO][4710] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.608 [INFO][4710] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5 Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.637 [INFO][4710] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.680 [INFO][4710] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.680 [INFO][4710] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" host="localhost" Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.680 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 00:43:13.994483 containerd[1585]: 2026-04-14 00:43:13.680 [INFO][4710] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" HandleID="k8s-pod-network.84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Workload="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.694 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--699f76cc4f--lckv5-eth0", GenerateName:"whisker-699f76cc4f-", Namespace:"calico-system", SelfLink:"", UID:"5bd17cef-238c-40d1-8571-f156827ee7bf", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"699f76cc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-699f76cc4f-lckv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79ae0e6faed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.694 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.694 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79ae0e6faed ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.893 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.900 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--699f76cc4f--lckv5-eth0", GenerateName:"whisker-699f76cc4f-", Namespace:"calico-system", SelfLink:"", UID:"5bd17cef-238c-40d1-8571-f156827ee7bf", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 43, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"699f76cc4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5", Pod:"whisker-699f76cc4f-lckv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79ae0e6faed", MAC:"06:6a:ef:38:cf:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:14.005306 containerd[1585]: 2026-04-14 00:43:13.969 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5" Namespace="calico-system" Pod="whisker-699f76cc4f-lckv5" WorkloadEndpoint="localhost-k8s-whisker--699f76cc4f--lckv5-eth0" Apr 14 00:43:14.013857 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:14.041046 systemd-journald[1170]: Under memory pressure, flushing caches. Apr 14 00:43:14.030059 systemd-resolved[1474]: Under memory pressure, flushing caches. Apr 14 00:43:14.030149 systemd-resolved[1474]: Flushed all caches. 
Apr 14 00:43:14.182628 kubelet[2695]: E0414 00:43:14.181071 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:14.247219 kubelet[2695]: E0414 00:43:14.246796 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:14.269591 kubelet[2695]: I0414 00:43:14.269329 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kbpl9" podStartSLOduration=53.269191766 podStartE2EDuration="53.269191766s" podCreationTimestamp="2026-04-14 00:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:43:14.261061036 +0000 UTC m=+56.788063351" watchObservedRunningTime="2026-04-14 00:43:14.269191766 +0000 UTC m=+56.796194096" Apr 14 00:43:14.290348 systemd-networkd[1257]: cali489c9c570f0: Gained IPv6LL Apr 14 00:43:14.306680 containerd[1585]: time="2026-04-14T00:43:14.292949698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66798d99fc-kp248,Uid:0177e648-4be2-489a-8d4b-4fbf09efab64,Namespace:calico-system,Attempt:0,} returns sandbox id \"695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221\"" Apr 14 00:43:14.316115 containerd[1585]: time="2026-04-14T00:43:14.311472470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 00:43:14.316115 containerd[1585]: time="2026-04-14T00:43:14.311912531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 00:43:14.316115 containerd[1585]: time="2026-04-14T00:43:14.312783216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:14.316115 containerd[1585]: time="2026-04-14T00:43:14.314229404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 00:43:14.336583 kubelet[2695]: I0414 00:43:14.334433 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zz9l6" podStartSLOduration=53.334224668 podStartE2EDuration="53.334224668s" podCreationTimestamp="2026-04-14 00:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 00:43:14.329979526 +0000 UTC m=+56.856981839" watchObservedRunningTime="2026-04-14 00:43:14.334224668 +0000 UTC m=+56.861227000" Apr 14 00:43:14.359862 containerd[1585]: time="2026-04-14T00:43:14.357923113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6745c75-8x5bs,Uid:7a147b57-8111-4288-bc48-06e9f79fcd93,Namespace:calico-system,Attempt:0,} returns sandbox id \"080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b\"" Apr 14 00:43:14.571310 systemd[1]: run-containerd-runc-k8s.io-84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5-runc.0Qvw4s.mount: Deactivated successfully. 
Apr 14 00:43:14.654630 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 00:43:14.736808 systemd-networkd[1257]: cali5b6d2757d57: Gained IPv6LL Apr 14 00:43:14.804008 containerd[1585]: time="2026-04-14T00:43:14.803775491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-699f76cc4f-lckv5,Uid:5bd17cef-238c-40d1-8571-f156827ee7bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5\"" Apr 14 00:43:14.952070 systemd-networkd[1257]: vxlan.calico: Link UP Apr 14 00:43:14.952077 systemd-networkd[1257]: vxlan.calico: Gained carrier Apr 14 00:43:14.990785 systemd-networkd[1257]: cali79ae0e6faed: Gained IPv6LL Apr 14 00:43:15.307837 kubelet[2695]: E0414 00:43:15.304170 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:15.307837 kubelet[2695]: E0414 00:43:15.306434 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:16.108665 systemd-journald[1170]: Under memory pressure, flushing caches. Apr 14 00:43:16.108579 systemd-resolved[1474]: Under memory pressure, flushing caches. Apr 14 00:43:16.108587 systemd-resolved[1474]: Flushed all caches. Apr 14 00:43:16.311415 kubelet[2695]: E0414 00:43:16.311232 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:16.397789 systemd-networkd[1257]: vxlan.calico: Gained IPv6LL Apr 14 00:43:17.326230 kubelet[2695]: E0414 00:43:17.325749 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:17.793991 containerd[1585]: time="2026-04-14T00:43:17.793904677Z" level=info msg="StopPodSandbox for \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\"" Apr 14 00:43:17.925767 containerd[1585]: time="2026-04-14T00:43:17.925320489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:17.928708 containerd[1585]: time="2026-04-14T00:43:17.927839898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 14 00:43:17.934421 containerd[1585]: time="2026-04-14T00:43:17.934143908Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:17.957587 containerd[1585]: time="2026-04-14T00:43:17.956039727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:17.970599 containerd[1585]: time="2026-04-14T00:43:17.970254287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.062610351s" Apr 14 00:43:17.970599 containerd[1585]: time="2026-04-14T00:43:17.970374776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 00:43:17.985996 containerd[1585]: time="2026-04-14T00:43:17.983198373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 00:43:17.994623 containerd[1585]: time="2026-04-14T00:43:17.994552407Z" level=info msg="CreateContainer within sandbox \"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 00:43:18.047090 containerd[1585]: time="2026-04-14T00:43:18.046664507Z" level=info msg="CreateContainer within sandbox \"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4ed7a88bcfd6363dc8483833c0d260dbe2bd770f984fe924da96954132486dac\"" Apr 14 00:43:18.049367 containerd[1585]: time="2026-04-14T00:43:18.049083629Z" level=info msg="StartContainer for \"4ed7a88bcfd6363dc8483833c0d260dbe2bd770f984fe924da96954132486dac\"" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:17.937 [WARNING][5058] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxm2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84892692-33db-4109-aafb-76ce1e050199", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce", Pod:"csi-node-driver-nxm2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3ff7065bfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:17.937 [INFO][5058] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:17.937 [INFO][5058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" iface="eth0" netns="" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:17.937 [INFO][5058] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:17.937 [INFO][5058] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.034 [INFO][5074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.035 [INFO][5074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.035 [INFO][5074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.067 [WARNING][5074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.074 [INFO][5074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.101 [INFO][5074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:18.124727 containerd[1585]: 2026-04-14 00:43:18.110 [INFO][5058] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.124727 containerd[1585]: time="2026-04-14T00:43:18.124722991Z" level=info msg="TearDown network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" successfully" Apr 14 00:43:18.127447 containerd[1585]: time="2026-04-14T00:43:18.124763375Z" level=info msg="StopPodSandbox for \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" returns successfully" Apr 14 00:43:18.127447 containerd[1585]: time="2026-04-14T00:43:18.126809749Z" level=info msg="RemovePodSandbox for \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\"" Apr 14 00:43:18.134239 containerd[1585]: time="2026-04-14T00:43:18.134088891Z" level=info msg="Forcibly stopping sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\"" Apr 14 00:43:18.329117 containerd[1585]: time="2026-04-14T00:43:18.328899714Z" level=info msg="StartContainer for \"4ed7a88bcfd6363dc8483833c0d260dbe2bd770f984fe924da96954132486dac\" returns successfully" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.305 [WARNING][5108] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxm2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84892692-33db-4109-aafb-76ce1e050199", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce", Pod:"csi-node-driver-nxm2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid3ff7065bfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.306 [INFO][5108] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.306 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" iface="eth0" netns="" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.306 [INFO][5108] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.306 [INFO][5108] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.493 [INFO][5128] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.494 [INFO][5128] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.494 [INFO][5128] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.531 [WARNING][5128] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.531 [INFO][5128] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" HandleID="k8s-pod-network.95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Workload="localhost-k8s-csi--node--driver--nxm2k-eth0" Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.550 [INFO][5128] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:18.562656 containerd[1585]: 2026-04-14 00:43:18.558 [INFO][5108] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8" Apr 14 00:43:18.562656 containerd[1585]: time="2026-04-14T00:43:18.562380044Z" level=info msg="TearDown network for sandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" successfully" Apr 14 00:43:18.701180 containerd[1585]: time="2026-04-14T00:43:18.700144712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:43:18.701180 containerd[1585]: time="2026-04-14T00:43:18.700251077Z" level=info msg="RemovePodSandbox \"95ef47b2da438b453fc9b0f4a8f9647c532761c62a35457220781a199463a8e8\" returns successfully" Apr 14 00:43:18.709186 containerd[1585]: time="2026-04-14T00:43:18.709039319Z" level=info msg="StopPodSandbox for \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\"" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:18.960 [WARNING][5157] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--vdfvw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c0cb2ee9-3046-46eb-8cd5-09888325a08a", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968", Pod:"goldmane-5b85766d88-vdfvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid55b3df500e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:18.965 [INFO][5157] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:18.972 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" iface="eth0" netns="" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:18.973 [INFO][5157] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:18.973 [INFO][5157] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.113 [INFO][5165] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.116 [INFO][5165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.116 [INFO][5165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.153 [WARNING][5165] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.153 [INFO][5165] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.163 [INFO][5165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:19.184130 containerd[1585]: 2026-04-14 00:43:19.173 [INFO][5157] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.184130 containerd[1585]: time="2026-04-14T00:43:19.180906707Z" level=info msg="TearDown network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" successfully" Apr 14 00:43:19.184130 containerd[1585]: time="2026-04-14T00:43:19.180950822Z" level=info msg="StopPodSandbox for \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" returns successfully" Apr 14 00:43:19.191939 containerd[1585]: time="2026-04-14T00:43:19.191039919Z" level=info msg="RemovePodSandbox for \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\"" Apr 14 00:43:19.197574 containerd[1585]: time="2026-04-14T00:43:19.192749764Z" level=info msg="Forcibly stopping sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\"" Apr 14 00:43:19.208122 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200). Apr 14 00:43:19.436248 kubelet[2695]: I0414 00:43:19.433127 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84b6745c75-9h22h" podStartSLOduration=38.334246639 podStartE2EDuration="44.433101585s" podCreationTimestamp="2026-04-14 00:42:35 +0000 UTC" firstStartedPulling="2026-04-14 00:43:11.874445288 +0000 UTC m=+54.401447602" lastFinishedPulling="2026-04-14 00:43:17.973300236 +0000 UTC m=+60.500302548" observedRunningTime="2026-04-14 00:43:19.43113127 +0000 UTC m=+61.958133590" watchObservedRunningTime="2026-04-14 00:43:19.433101585 +0000 UTC m=+61.960103906" Apr 14 00:43:19.451557 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:19.453357 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:19.491689 systemd-logind[1566]: New session 8 of user core. Apr 14 00:43:19.497167 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.493 [WARNING][5191] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--vdfvw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c0cb2ee9-3046-46eb-8cd5-09888325a08a", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968", Pod:"goldmane-5b85766d88-vdfvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid55b3df500e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.495 [INFO][5191] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.495 [INFO][5191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" iface="eth0" netns="" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.495 [INFO][5191] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.495 [INFO][5191] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.565 [INFO][5202] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.566 [INFO][5202] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.566 [INFO][5202] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.618 [WARNING][5202] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.619 [INFO][5202] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" HandleID="k8s-pod-network.6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Workload="localhost-k8s-goldmane--5b85766d88--vdfvw-eth0" Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.637 [INFO][5202] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:19.696019 containerd[1585]: 2026-04-14 00:43:19.690 [INFO][5191] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc" Apr 14 00:43:19.696019 containerd[1585]: time="2026-04-14T00:43:19.695823492Z" level=info msg="TearDown network for sandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" successfully" Apr 14 00:43:19.732796 containerd[1585]: time="2026-04-14T00:43:19.732644817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:43:19.734143 containerd[1585]: time="2026-04-14T00:43:19.733789914Z" level=info msg="RemovePodSandbox \"6493e7234b34139019da1cdb7bc3b883f9dfac190a5d31b2f282dac6229f70cc\" returns successfully" Apr 14 00:43:19.735878 containerd[1585]: time="2026-04-14T00:43:19.735675608Z" level=info msg="StopPodSandbox for \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\"" Apr 14 00:43:20.159949 sshd[5185]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:20.167950 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Apr 14 00:43:20.169005 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:49200.service: Deactivated successfully. Apr 14 00:43:20.179466 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 00:43:20.187259 systemd-logind[1566]: Removed session 8. Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.020 [WARNING][5230] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8b6dcb6-76de-4cca-bc53-2b56358df948", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043", Pod:"coredns-674b8bbfcf-kbpl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67a48a57600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.020 [INFO][5230] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.020 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" iface="eth0" netns="" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.020 [INFO][5230] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.020 [INFO][5230] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.161 [INFO][5244] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.162 [INFO][5244] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.163 [INFO][5244] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.189 [WARNING][5244] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.190 [INFO][5244] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.201 [INFO][5244] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:20.214124 containerd[1585]: 2026-04-14 00:43:20.206 [INFO][5230] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.215690 containerd[1585]: time="2026-04-14T00:43:20.214159814Z" level=info msg="TearDown network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" successfully" Apr 14 00:43:20.215690 containerd[1585]: time="2026-04-14T00:43:20.214205993Z" level=info msg="StopPodSandbox for \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" returns successfully" Apr 14 00:43:20.217220 containerd[1585]: time="2026-04-14T00:43:20.216892826Z" level=info msg="RemovePodSandbox for \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\"" Apr 14 00:43:20.217220 containerd[1585]: time="2026-04-14T00:43:20.217048277Z" level=info msg="Forcibly stopping sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\"" Apr 14 00:43:20.291857 containerd[1585]: time="2026-04-14T00:43:20.290874625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:20.293389 containerd[1585]: time="2026-04-14T00:43:20.293146859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 00:43:20.297268 containerd[1585]: time="2026-04-14T00:43:20.297167476Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:20.396064 containerd[1585]: time="2026-04-14T00:43:20.396010855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:20.398131 containerd[1585]: time="2026-04-14T00:43:20.398040780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.414784244s" Apr 14 00:43:20.398131 containerd[1585]: time="2026-04-14T00:43:20.398126677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 00:43:20.410670 kubelet[2695]: 
I0414 00:43:20.404235 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 00:43:20.410835 containerd[1585]: time="2026-04-14T00:43:20.408678033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 14 00:43:20.423217 containerd[1585]: time="2026-04-14T00:43:20.423096123Z" level=info msg="CreateContainer within sandbox \"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 00:43:20.475179 containerd[1585]: time="2026-04-14T00:43:20.474952787Z" level=info msg="CreateContainer within sandbox \"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eeb6550e3d35c7e2766ee84cf9151c52f9ce8d301989425e9e45f67657d7811f\"" Apr 14 00:43:20.483582 containerd[1585]: time="2026-04-14T00:43:20.482936592Z" level=info msg="StartContainer for \"eeb6550e3d35c7e2766ee84cf9151c52f9ce8d301989425e9e45f67657d7811f\"" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.451 [WARNING][5267] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8b6dcb6-76de-4cca-bc53-2b56358df948", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fb379988d7dfacc8174b0e61f609c9fc5932964dbb68a5290331e0325e8c043", Pod:"coredns-674b8bbfcf-kbpl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67a48a57600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.452 [INFO][5267] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.453 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" iface="eth0" netns="" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.453 [INFO][5267] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.453 [INFO][5267] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.603 [INFO][5276] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.604 [INFO][5276] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.604 [INFO][5276] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.639 [WARNING][5276] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.639 [INFO][5276] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" HandleID="k8s-pod-network.a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Workload="localhost-k8s-coredns--674b8bbfcf--kbpl9-eth0" Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.662 [INFO][5276] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:20.714911 containerd[1585]: 2026-04-14 00:43:20.705 [INFO][5267] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222" Apr 14 00:43:20.716673 containerd[1585]: time="2026-04-14T00:43:20.714942153Z" level=info msg="TearDown network for sandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" successfully" Apr 14 00:43:20.822131 containerd[1585]: time="2026-04-14T00:43:20.821415426Z" level=info msg="StartContainer for \"eeb6550e3d35c7e2766ee84cf9151c52f9ce8d301989425e9e45f67657d7811f\" returns successfully" Apr 14 00:43:20.861653 containerd[1585]: time="2026-04-14T00:43:20.860785707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 00:43:20.861653 containerd[1585]: time="2026-04-14T00:43:20.860984683Z" level=info msg="RemovePodSandbox \"a469e7eb897b1241cb46f5381aca943d0e794632ea2354974721a062d5976222\" returns successfully" Apr 14 00:43:20.866949 containerd[1585]: time="2026-04-14T00:43:20.866266363Z" level=info msg="StopPodSandbox for \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\"" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.041 [WARNING][5327] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"9b6fd022-3c9d-4e45-8685-b71788e63101", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03", Pod:"calico-apiserver-84b6745c75-9h22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali07de914d00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.048 [INFO][5327] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.052 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" iface="eth0" netns="" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.059 [INFO][5327] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.060 [INFO][5327] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.215 [INFO][5336] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.216 [INFO][5336] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.216 [INFO][5336] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.244 [WARNING][5336] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.245 [INFO][5336] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.250 [INFO][5336] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:21.259060 containerd[1585]: 2026-04-14 00:43:21.255 [INFO][5327] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.259060 containerd[1585]: time="2026-04-14T00:43:21.259037437Z" level=info msg="TearDown network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" successfully" Apr 14 00:43:21.259906 containerd[1585]: time="2026-04-14T00:43:21.259089419Z" level=info msg="StopPodSandbox for \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" returns successfully" Apr 14 00:43:21.260112 containerd[1585]: time="2026-04-14T00:43:21.260064992Z" level=info msg="RemovePodSandbox for \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\"" Apr 14 00:43:21.260178 containerd[1585]: time="2026-04-14T00:43:21.260126189Z" level=info msg="Forcibly stopping sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\"" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.427 [WARNING][5364] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0", GenerateName:"calico-apiserver-84b6745c75-", Namespace:"calico-system", SelfLink:"", UID:"9b6fd022-3c9d-4e45-8685-b71788e63101", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 0, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6745c75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68bc1a20c9b32a03154267bab465773c490e33b2cd77668e53ced0798f371c03", Pod:"calico-apiserver-84b6745c75-9h22h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali07de914d00b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.429 [INFO][5364] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.429 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" iface="eth0" netns="" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.429 [INFO][5364] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.430 [INFO][5364] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.553 [INFO][5373] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.553 [INFO][5373] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.553 [INFO][5373] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.576 [WARNING][5373] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.577 [INFO][5373] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" HandleID="k8s-pod-network.ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Workload="localhost-k8s-calico--apiserver--84b6745c75--9h22h-eth0" Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.584 [INFO][5373] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:21.596123 containerd[1585]: 2026-04-14 00:43:21.589 [INFO][5364] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f" Apr 14 00:43:21.596123 containerd[1585]: time="2026-04-14T00:43:21.594823823Z" level=info msg="TearDown network for sandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" successfully" Apr 14 00:43:21.601572 containerd[1585]: time="2026-04-14T00:43:21.601374487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:43:21.601572 containerd[1585]: time="2026-04-14T00:43:21.601564223Z" level=info msg="RemovePodSandbox \"ece4f9d07cf969a86b4415a009978d48b8476f62692f371c3786c152fdb1ea0f\" returns successfully" Apr 14 00:43:21.604252 containerd[1585]: time="2026-04-14T00:43:21.603153257Z" level=info msg="StopPodSandbox for \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\"" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.772 [WARNING][5392] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" WorkloadEndpoint="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.773 [INFO][5392] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.774 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" iface="eth0" netns="" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.774 [INFO][5392] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.774 [INFO][5392] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.963 [INFO][5400] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.964 [INFO][5400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.965 [INFO][5400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.987 [WARNING][5400] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:21.988 [INFO][5400] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:22.004 [INFO][5400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:22.027683 containerd[1585]: 2026-04-14 00:43:22.013 [INFO][5392] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.027683 containerd[1585]: time="2026-04-14T00:43:22.027106830Z" level=info msg="TearDown network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" successfully" Apr 14 00:43:22.027683 containerd[1585]: time="2026-04-14T00:43:22.027177124Z" level=info msg="StopPodSandbox for \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" returns successfully" Apr 14 00:43:22.030598 containerd[1585]: time="2026-04-14T00:43:22.028483036Z" level=info msg="RemovePodSandbox for \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\"" Apr 14 00:43:22.030598 containerd[1585]: time="2026-04-14T00:43:22.028593997Z" level=info msg="Forcibly stopping sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\"" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.422 [WARNING][5417] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" WorkloadEndpoint="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.422 [INFO][5417] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.422 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" iface="eth0" netns="" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.422 [INFO][5417] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.422 [INFO][5417] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.614 [INFO][5430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.615 [INFO][5430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.615 [INFO][5430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.644 [WARNING][5430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.644 [INFO][5430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" HandleID="k8s-pod-network.5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Workload="localhost-k8s-whisker--596d75bdd--w8wwn-eth0" Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.669 [INFO][5430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 00:43:22.687657 containerd[1585]: 2026-04-14 00:43:22.679 [INFO][5417] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701" Apr 14 00:43:22.687657 containerd[1585]: time="2026-04-14T00:43:22.687003308Z" level=info msg="TearDown network for sandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" successfully" Apr 14 00:43:22.699891 containerd[1585]: time="2026-04-14T00:43:22.699834931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 00:43:22.701720 containerd[1585]: time="2026-04-14T00:43:22.701666154Z" level=info msg="RemovePodSandbox \"5e272f8b456eb3b4fe425ebadbea6223e66a8b79218d77538a9a48cba947a701\" returns successfully" Apr 14 00:43:24.627926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985764678.mount: Deactivated successfully. Apr 14 00:43:25.208763 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:47446.service - OpenSSH per-connection server daemon (10.0.0.1:47446). Apr 14 00:43:25.300343 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 47446 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:25.306395 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:25.323379 systemd-logind[1566]: New session 9 of user core. Apr 14 00:43:25.337430 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 14 00:43:25.882775 sshd[5444]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:25.993065 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Apr 14 00:43:26.020362 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:47446.service: Deactivated successfully. Apr 14 00:43:26.048035 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 00:43:26.053809 systemd-logind[1566]: Removed session 9. 
Apr 14 00:43:26.567210 containerd[1585]: time="2026-04-14T00:43:26.567040685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:26.620482 containerd[1585]: time="2026-04-14T00:43:26.619999372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 14 00:43:26.628434 containerd[1585]: time="2026-04-14T00:43:26.624129066Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:26.633396 containerd[1585]: time="2026-04-14T00:43:26.632586395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.223766195s" Apr 14 00:43:26.633396 containerd[1585]: time="2026-04-14T00:43:26.632777849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 14 00:43:26.633396 containerd[1585]: time="2026-04-14T00:43:26.633191654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:26.638701 containerd[1585]: time="2026-04-14T00:43:26.636544983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 14 00:43:26.652120 containerd[1585]: time="2026-04-14T00:43:26.651998700Z" level=info msg="CreateContainer within sandbox \"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 14 00:43:26.720893 containerd[1585]: time="2026-04-14T00:43:26.720379827Z" level=info msg="CreateContainer within sandbox \"ba10b5a26a7b81a01947019f6b372ba276dc797464dbd5086324f21645e3d968\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bb04ba5a448d04e41aa348c66fe8fa74fc7d43cbc19e1d2512e797dad52bfcd9\"" Apr 14 00:43:26.725437 containerd[1585]: time="2026-04-14T00:43:26.725146720Z" level=info msg="StartContainer for \"bb04ba5a448d04e41aa348c66fe8fa74fc7d43cbc19e1d2512e797dad52bfcd9\"" Apr 14 00:43:27.280556 containerd[1585]: time="2026-04-14T00:43:27.280155362Z" level=info msg="StartContainer for \"bb04ba5a448d04e41aa348c66fe8fa74fc7d43cbc19e1d2512e797dad52bfcd9\" returns successfully" Apr 14 00:43:30.813808 kubelet[2695]: E0414 00:43:30.813683 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:30.927884 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:47460.service - OpenSSH per-connection server daemon (10.0.0.1:47460). Apr 14 00:43:31.106392 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 47460 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:31.112361 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:31.137903 systemd-logind[1566]: New session 10 of user core. 
Apr 14 00:43:31.152892 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 00:43:32.214956 sshd[5592]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:32.244914 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:47460.service: Deactivated successfully. Apr 14 00:43:32.261077 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Apr 14 00:43:32.261804 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 00:43:32.281951 systemd-logind[1566]: Removed session 10. Apr 14 00:43:32.616736 systemd[1]: run-containerd-runc-k8s.io-bb04ba5a448d04e41aa348c66fe8fa74fc7d43cbc19e1d2512e797dad52bfcd9-runc.gOQctS.mount: Deactivated successfully. Apr 14 00:43:33.636765 containerd[1585]: time="2026-04-14T00:43:33.634427131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:33.639592 containerd[1585]: time="2026-04-14T00:43:33.639409401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 14 00:43:33.655390 containerd[1585]: time="2026-04-14T00:43:33.655282603Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:33.678595 containerd[1585]: time="2026-04-14T00:43:33.678344336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:33.685467 containerd[1585]: time="2026-04-14T00:43:33.685215235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 7.048571783s" Apr 14 00:43:33.685467 containerd[1585]: time="2026-04-14T00:43:33.685293200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 14 00:43:33.687643 containerd[1585]: time="2026-04-14T00:43:33.687600688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 00:43:33.840103 containerd[1585]: time="2026-04-14T00:43:33.834698198Z" level=info msg="CreateContainer within sandbox \"695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 14 00:43:33.927718 containerd[1585]: time="2026-04-14T00:43:33.927203745Z" level=info msg="CreateContainer within sandbox \"695dae7f9a1ec82f030bc3492efdc3adf820da0ef83e277ec3e00818c93ea221\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"37e8fcf64d604a7ece7b26a879ec498808e6c6e18cc92e63efc7b986c4b93485\"" Apr 14 00:43:33.966580 containerd[1585]: time="2026-04-14T00:43:33.966411406Z" level=info msg="StartContainer for \"37e8fcf64d604a7ece7b26a879ec498808e6c6e18cc92e63efc7b986c4b93485\"" Apr 14 00:43:34.002013 systemd-resolved[1474]: Under memory pressure, flushing caches. Apr 14 00:43:34.003968 systemd-journald[1170]: Under memory pressure, flushing caches. 
Apr 14 00:43:34.002053 systemd-resolved[1474]: Flushed all caches. Apr 14 00:43:34.349108 containerd[1585]: time="2026-04-14T00:43:34.347039691Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:34.353361 containerd[1585]: time="2026-04-14T00:43:34.352440503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 14 00:43:34.362293 containerd[1585]: time="2026-04-14T00:43:34.361851493Z" level=info msg="StartContainer for \"37e8fcf64d604a7ece7b26a879ec498808e6c6e18cc92e63efc7b986c4b93485\" returns successfully" Apr 14 00:43:34.403917 containerd[1585]: time="2026-04-14T00:43:34.403156688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 715.455817ms" Apr 14 00:43:34.405106 containerd[1585]: time="2026-04-14T00:43:34.404306647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 00:43:34.411880 containerd[1585]: time="2026-04-14T00:43:34.411156229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 14 00:43:34.426988 containerd[1585]: time="2026-04-14T00:43:34.426355867Z" level=info msg="CreateContainer within sandbox \"080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 00:43:34.494012 containerd[1585]: time="2026-04-14T00:43:34.493419114Z" level=info msg="CreateContainer within sandbox \"080926573c2ceec5cdd74ec370ab60e2478dfd43ab5f024db3f3dbff0a5e0d4b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"deb62cb6a48a252e9c399ea069ad0bb1aa5b1e8d2189bce1276f48e847f97dac\"" Apr 14 00:43:34.498660 containerd[1585]: time="2026-04-14T00:43:34.496054973Z" level=info msg="StartContainer for \"deb62cb6a48a252e9c399ea069ad0bb1aa5b1e8d2189bce1276f48e847f97dac\"" Apr 14 00:43:35.008843 kubelet[2695]: I0414 00:43:35.001053 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-vdfvw" podStartSLOduration=46.341422446 podStartE2EDuration="58.994596634s" podCreationTimestamp="2026-04-14 00:42:36 +0000 UTC" firstStartedPulling="2026-04-14 00:43:13.98237537 +0000 UTC m=+56.509377680" lastFinishedPulling="2026-04-14 00:43:26.635549549 +0000 UTC m=+69.162551868" observedRunningTime="2026-04-14 00:43:27.693099007 +0000 UTC m=+70.220101331" watchObservedRunningTime="2026-04-14 00:43:34.994596634 +0000 UTC m=+77.521599016" Apr 14 00:43:35.189273 containerd[1585]: time="2026-04-14T00:43:35.189080020Z" level=info msg="StartContainer for \"deb62cb6a48a252e9c399ea069ad0bb1aa5b1e8d2189bce1276f48e847f97dac\" returns successfully" Apr 14 00:43:35.788205 kubelet[2695]: I0414 00:43:35.787255 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 00:43:35.823736 kubelet[2695]: E0414 00:43:35.823102 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:35.978700 
kubelet[2695]: I0414 00:43:35.974460 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66798d99fc-kp248" podStartSLOduration=39.624376354 podStartE2EDuration="58.974430823s" podCreationTimestamp="2026-04-14 00:42:37 +0000 UTC" firstStartedPulling="2026-04-14 00:43:14.336827656 +0000 UTC m=+56.863829979" lastFinishedPulling="2026-04-14 00:43:33.686882136 +0000 UTC m=+76.213884448" observedRunningTime="2026-04-14 00:43:35.024815438 +0000 UTC m=+77.551817761" watchObservedRunningTime="2026-04-14 00:43:35.974430823 +0000 UTC m=+78.501433148" Apr 14 00:43:36.211272 kubelet[2695]: I0414 00:43:36.204046 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84b6745c75-8x5bs" podStartSLOduration=41.159423496 podStartE2EDuration="1m1.203992667s" podCreationTimestamp="2026-04-14 00:42:35 +0000 UTC" firstStartedPulling="2026-04-14 00:43:14.365359089 +0000 UTC m=+56.892361397" lastFinishedPulling="2026-04-14 00:43:34.409928255 +0000 UTC m=+76.936930568" observedRunningTime="2026-04-14 00:43:36.153616768 +0000 UTC m=+78.680619103" watchObservedRunningTime="2026-04-14 00:43:36.203992667 +0000 UTC m=+78.730995016" Apr 14 00:43:37.300212 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:57336.service - OpenSSH per-connection server daemon (10.0.0.1:57336). Apr 14 00:43:37.477107 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 57336 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:37.484675 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:37.520599 systemd-logind[1566]: New session 11 of user core. Apr 14 00:43:37.529759 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 14 00:43:37.749576 containerd[1585]: time="2026-04-14T00:43:37.748676596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:37.752950 containerd[1585]: time="2026-04-14T00:43:37.752825133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 14 00:43:37.762373 containerd[1585]: time="2026-04-14T00:43:37.762254137Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:37.793608 containerd[1585]: time="2026-04-14T00:43:37.792786052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:37.805719 containerd[1585]: time="2026-04-14T00:43:37.805555375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 3.393264922s" Apr 14 00:43:37.805719 containerd[1585]: time="2026-04-14T00:43:37.805661768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 14 00:43:37.818950 containerd[1585]: time="2026-04-14T00:43:37.818902094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 14 00:43:37.846053 containerd[1585]: time="2026-04-14T00:43:37.844102727Z" level=info msg="CreateContainer within sandbox \"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 14 00:43:37.994924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459304551.mount: Deactivated successfully. Apr 14 00:43:38.042899 containerd[1585]: time="2026-04-14T00:43:38.035898946Z" level=info msg="CreateContainer within sandbox \"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"93d36cfc76dd8fa36fce090a09c48d034184509e5159559d167834e067b1d838\"" Apr 14 00:43:38.056403 containerd[1585]: time="2026-04-14T00:43:38.056192440Z" level=info msg="StartContainer for \"93d36cfc76dd8fa36fce090a09c48d034184509e5159559d167834e067b1d838\"" Apr 14 00:43:38.111629 kubelet[2695]: I0414 00:43:38.107467 2695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 00:43:38.697855 sshd[5784]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:38.704680 containerd[1585]: time="2026-04-14T00:43:38.703315887Z" level=info msg="StartContainer for \"93d36cfc76dd8fa36fce090a09c48d034184509e5159559d167834e067b1d838\" returns successfully" Apr 14 00:43:38.721191 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:57336.service: Deactivated successfully. Apr 14 00:43:38.736604 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 00:43:38.736996 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Apr 14 00:43:38.746010 systemd-logind[1566]: Removed session 11. 
Apr 14 00:43:41.238682 containerd[1585]: time="2026-04-14T00:43:41.237342884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:41.238682 containerd[1585]: time="2026-04-14T00:43:41.238349505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 14 00:43:41.249774 containerd[1585]: time="2026-04-14T00:43:41.249695604Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:41.264762 containerd[1585]: time="2026-04-14T00:43:41.264689377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:41.270850 containerd[1585]: time="2026-04-14T00:43:41.269596196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.446816486s" Apr 14 00:43:41.270850 containerd[1585]: time="2026-04-14T00:43:41.269728932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 14 00:43:41.282471 containerd[1585]: time="2026-04-14T00:43:41.279907141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 14 00:43:41.401445 containerd[1585]: time="2026-04-14T00:43:41.401215606Z" level=info msg="CreateContainer within sandbox \"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 14 00:43:41.495176 containerd[1585]: time="2026-04-14T00:43:41.494701116Z" level=info msg="CreateContainer within sandbox \"178dca9a84d182382810463365fa9c09e5036df059794aa378cc4a7fc2ec1dce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f3949e69897f07c6c62e7b7e1b443f985e910c670c072fe0fcfff741d6db120\"" Apr 14 00:43:41.501009 containerd[1585]: time="2026-04-14T00:43:41.500893480Z" level=info msg="StartContainer for \"9f3949e69897f07c6c62e7b7e1b443f985e910c670c072fe0fcfff741d6db120\"" Apr 14 00:43:41.842289 containerd[1585]: time="2026-04-14T00:43:41.841910802Z" level=info msg="StartContainer for \"9f3949e69897f07c6c62e7b7e1b443f985e910c670c072fe0fcfff741d6db120\" returns successfully" Apr 14 00:43:42.717039 kubelet[2695]: I0414 00:43:42.716000 2695 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 14 00:43:42.721285 kubelet[2695]: I0414 00:43:42.720831 2695 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 14 00:43:43.787684 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:57344.service - OpenSSH per-connection server daemon (10.0.0.1:57344). 
Apr 14 00:43:43.935920 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 57344 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:43.985044 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:44.021771 systemd-logind[1566]: New session 12 of user core. Apr 14 00:43:44.050745 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 00:43:44.893837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461656783.mount: Deactivated successfully. Apr 14 00:43:45.037773 containerd[1585]: time="2026-04-14T00:43:45.037623318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:45.060000 containerd[1585]: time="2026-04-14T00:43:45.059852512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 14 00:43:45.063618 containerd[1585]: time="2026-04-14T00:43:45.063329136Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:45.074007 containerd[1585]: time="2026-04-14T00:43:45.072654375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 00:43:45.077789 containerd[1585]: time="2026-04-14T00:43:45.077594442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.797639359s" Apr 14 00:43:45.079240 containerd[1585]: time="2026-04-14T00:43:45.077867308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 14 00:43:45.102898 containerd[1585]: time="2026-04-14T00:43:45.102067526Z" level=info msg="CreateContainer within sandbox \"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 14 00:43:45.209796 containerd[1585]: time="2026-04-14T00:43:45.206769251Z" level=info msg="CreateContainer within sandbox \"84978d04a912eb996ac76c937551b7d060ca32488be120481a53dc7fc38287c5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1acd2b0dfcc34817c5cd0ec5d817dc5598d8b8c98f0b2f403e22faa60c25c5f2\"" Apr 14 00:43:45.214414 containerd[1585]: time="2026-04-14T00:43:45.212397476Z" level=info msg="StartContainer for \"1acd2b0dfcc34817c5cd0ec5d817dc5598d8b8c98f0b2f403e22faa60c25c5f2\"" Apr 14 00:43:45.437868 sshd[5915]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:45.510631 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:57344.service: Deactivated successfully. Apr 14 00:43:45.521924 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 00:43:45.533771 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Apr 14 00:43:45.548190 systemd-logind[1566]: Removed session 12. 
Apr 14 00:43:45.776406 containerd[1585]: time="2026-04-14T00:43:45.739859063Z" level=info msg="StartContainer for \"1acd2b0dfcc34817c5cd0ec5d817dc5598d8b8c98f0b2f403e22faa60c25c5f2\" returns successfully" Apr 14 00:43:46.532134 kubelet[2695]: I0414 00:43:46.531471 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nxm2k" podStartSLOduration=41.534817909 podStartE2EDuration="1m9.53142335s" podCreationTimestamp="2026-04-14 00:42:37 +0000 UTC" firstStartedPulling="2026-04-14 00:43:13.282097996 +0000 UTC m=+55.809100311" lastFinishedPulling="2026-04-14 00:43:41.278703437 +0000 UTC m=+83.805705752" observedRunningTime="2026-04-14 00:43:42.296632853 +0000 UTC m=+84.823635163" watchObservedRunningTime="2026-04-14 00:43:46.53142335 +0000 UTC m=+89.058425673" Apr 14 00:43:46.532134 kubelet[2695]: I0414 00:43:46.531989 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-699f76cc4f-lckv5" podStartSLOduration=5.262089317 podStartE2EDuration="35.531979521s" podCreationTimestamp="2026-04-14 00:43:11 +0000 UTC" firstStartedPulling="2026-04-14 00:43:14.81684403 +0000 UTC m=+57.343846354" lastFinishedPulling="2026-04-14 00:43:45.086734235 +0000 UTC m=+87.613736558" observedRunningTime="2026-04-14 00:43:46.519921688 +0000 UTC m=+89.046924010" watchObservedRunningTime="2026-04-14 00:43:46.531979521 +0000 UTC m=+89.058981845" Apr 14 00:43:46.819408 kubelet[2695]: E0414 00:43:46.816055 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:50.515670 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:40218.service - OpenSSH per-connection server daemon (10.0.0.1:40218). Apr 14 00:43:50.630243 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 40218 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:50.632743 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:50.652846 systemd-logind[1566]: New session 13 of user core. Apr 14 00:43:50.663725 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 00:43:51.605864 sshd[5994]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:51.640455 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:40218.service: Deactivated successfully. Apr 14 00:43:51.656068 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Apr 14 00:43:51.659671 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 00:43:51.669952 systemd-logind[1566]: Removed session 13. Apr 14 00:43:54.815331 kubelet[2695]: E0414 00:43:54.813010 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:43:56.624349 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:35516.service - OpenSSH per-connection server daemon (10.0.0.1:35516). Apr 14 00:43:56.819891 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 35516 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:43:56.831446 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:43:56.898934 systemd-logind[1566]: New session 14 of user core. Apr 14 00:43:56.924211 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 14 00:43:58.205354 sshd[6032]: pam_unix(sshd:session): session closed for user core Apr 14 00:43:58.235848 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:35516.service: Deactivated successfully. Apr 14 00:43:58.245769 systemd[1]: session-14.scope: Deactivated successfully. Apr 14 00:43:58.246286 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Apr 14 00:43:58.250282 systemd-logind[1566]: Removed session 14. Apr 14 00:44:00.056676 systemd-journald[1170]: Under memory pressure, flushing caches. Apr 14 00:44:00.047114 systemd-resolved[1474]: Under memory pressure, flushing caches. Apr 14 00:44:00.047161 systemd-resolved[1474]: Flushed all caches. Apr 14 00:44:03.236608 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:35532.service - OpenSSH per-connection server daemon (10.0.0.1:35532). Apr 14 00:44:03.542669 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 35532 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:03.545773 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:03.601407 systemd-logind[1566]: New session 15 of user core. Apr 14 00:44:03.622019 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 14 00:44:04.278326 sshd[6076]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:04.286741 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:35532.service: Deactivated successfully. Apr 14 00:44:04.292769 systemd[1]: session-15.scope: Deactivated successfully. Apr 14 00:44:04.299730 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Apr 14 00:44:04.306132 systemd-logind[1566]: Removed session 15. Apr 14 00:44:06.112687 systemd[1]: run-containerd-runc-k8s.io-37e8fcf64d604a7ece7b26a879ec498808e6c6e18cc92e63efc7b986c4b93485-runc.Voc01M.mount: Deactivated successfully. Apr 14 00:44:09.308795 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). Apr 14 00:44:09.433351 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:09.442913 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:09.465559 systemd-logind[1566]: New session 16 of user core. Apr 14 00:44:09.477439 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 14 00:44:10.096663 sshd[6112]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:10.107013 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:33130.service: Deactivated successfully. Apr 14 00:44:10.118639 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. Apr 14 00:44:10.121565 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 00:44:10.126801 systemd-logind[1566]: Removed session 16. Apr 14 00:44:11.813391 kubelet[2695]: E0414 00:44:11.813266 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:44:15.139286 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:33132.service - OpenSSH per-connection server daemon (10.0.0.1:33132). 
Apr 14 00:44:15.386981 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 33132 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:15.389940 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:15.403173 systemd-logind[1566]: New session 17 of user core. Apr 14 00:44:15.414655 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 14 00:44:16.035337 sshd[6150]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:16.093953 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:33132.service: Deactivated successfully. Apr 14 00:44:16.104236 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Apr 14 00:44:16.106406 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 00:44:16.110891 systemd-logind[1566]: Removed session 17. Apr 14 00:44:21.060393 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:53410.service - OpenSSH per-connection server daemon (10.0.0.1:53410). Apr 14 00:44:21.111399 sshd[6169]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:21.114164 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:21.129152 systemd-logind[1566]: New session 18 of user core. Apr 14 00:44:21.141642 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 14 00:44:21.505999 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:21.513451 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:53410.service: Deactivated successfully. Apr 14 00:44:21.518624 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Apr 14 00:44:21.518721 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 00:44:21.523398 systemd-logind[1566]: Removed session 18. Apr 14 00:44:26.530347 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Apr 14 00:44:26.592969 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:26.599607 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:26.614641 systemd-logind[1566]: New session 19 of user core. Apr 14 00:44:26.624444 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 00:44:27.002320 sshd[6189]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:27.014426 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:38822.service: Deactivated successfully. Apr 14 00:44:27.020644 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 00:44:27.020652 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Apr 14 00:44:27.024907 systemd-logind[1566]: Removed session 19. Apr 14 00:44:29.804898 systemd[1]: run-containerd-runc-k8s.io-bb04ba5a448d04e41aa348c66fe8fa74fc7d43cbc19e1d2512e797dad52bfcd9-runc.hgWODb.mount: Deactivated successfully. Apr 14 00:44:30.820469 kubelet[2695]: E0414 00:44:30.820028 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:44:32.023324 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:38836.service - OpenSSH per-connection server daemon (10.0.0.1:38836). 
Apr 14 00:44:32.082421 sshd[6226]: Accepted publickey for core from 10.0.0.1 port 38836 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:32.087472 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:32.104188 systemd-logind[1566]: New session 20 of user core. Apr 14 00:44:32.132609 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 00:44:32.519157 sshd[6226]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:32.527995 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:38836.service: Deactivated successfully. Apr 14 00:44:32.538222 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 00:44:32.545408 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Apr 14 00:44:32.556580 systemd-logind[1566]: Removed session 20. Apr 14 00:44:32.811825 kubelet[2695]: E0414 00:44:32.811382 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:44:35.812351 kubelet[2695]: E0414 00:44:35.812255 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:44:37.538013 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). Apr 14 00:44:37.594654 sshd[6298]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:37.596859 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:37.620886 systemd-logind[1566]: New session 21 of user core. Apr 14 00:44:37.632328 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 00:44:37.935315 sshd[6298]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:38.013600 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). Apr 14 00:44:38.015023 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:60340.service: Deactivated successfully. Apr 14 00:44:38.019359 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 00:44:38.021854 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Apr 14 00:44:38.023935 systemd-logind[1566]: Removed session 21. Apr 14 00:44:38.109379 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:38.112149 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:38.119163 systemd-logind[1566]: New session 22 of user core. Apr 14 00:44:38.132866 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 00:44:38.492384 sshd[6313]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:38.502919 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352). Apr 14 00:44:38.503794 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:60344.service: Deactivated successfully. Apr 14 00:44:38.511721 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Apr 14 00:44:38.513743 systemd[1]: session-22.scope: Deactivated successfully. Apr 14 00:44:38.518911 systemd-logind[1566]: Removed session 22. 
Apr 14 00:44:38.586168 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:38.586295 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:38.596105 systemd-logind[1566]: New session 23 of user core. Apr 14 00:44:38.616844 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 14 00:44:38.898312 sshd[6332]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:38.911014 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:60352.service: Deactivated successfully. Apr 14 00:44:38.924464 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Apr 14 00:44:38.927107 systemd[1]: session-23.scope: Deactivated successfully. Apr 14 00:44:38.933687 systemd-logind[1566]: Removed session 23. Apr 14 00:44:43.914572 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:60362.service - OpenSSH per-connection server daemon (10.0.0.1:60362). Apr 14 00:44:44.014545 sshd[6383]: Accepted publickey for core from 10.0.0.1 port 60362 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:44.020031 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:44.099732 systemd-logind[1566]: New session 24 of user core. Apr 14 00:44:44.115967 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 14 00:44:44.667750 sshd[6383]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:44.675427 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:60362.service: Deactivated successfully. Apr 14 00:44:44.682223 systemd[1]: session-24.scope: Deactivated successfully. Apr 14 00:44:44.683837 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Apr 14 00:44:44.688669 systemd-logind[1566]: Removed session 24. Apr 14 00:44:49.694027 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:33270.service - OpenSSH per-connection server daemon (10.0.0.1:33270). Apr 14 00:44:49.752747 sshd[6453]: Accepted publickey for core from 10.0.0.1 port 33270 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:49.756006 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:49.773125 systemd-logind[1566]: New session 25 of user core. Apr 14 00:44:49.787895 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 14 00:44:50.118760 sshd[6453]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:50.126373 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:33270.service: Deactivated successfully. Apr 14 00:44:50.133482 systemd[1]: session-25.scope: Deactivated successfully. Apr 14 00:44:50.136936 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. Apr 14 00:44:50.140339 systemd-logind[1566]: Removed session 25. Apr 14 00:44:55.193790 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:33276.service - OpenSSH per-connection server daemon (10.0.0.1:33276). Apr 14 00:44:55.284312 sshd[6471]: Accepted publickey for core from 10.0.0.1 port 33276 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:44:55.286018 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:44:55.301263 systemd-logind[1566]: New session 26 of user core. Apr 14 00:44:55.314658 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 14 00:44:55.652266 sshd[6471]: pam_unix(sshd:session): session closed for user core Apr 14 00:44:55.664193 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:33276.service: Deactivated successfully. Apr 14 00:44:55.676132 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit. Apr 14 00:44:55.676371 systemd[1]: session-26.scope: Deactivated successfully. Apr 14 00:44:55.682877 systemd-logind[1566]: Removed session 26. Apr 14 00:44:56.812067 kubelet[2695]: E0414 00:44:56.811595 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:00.692657 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:49566.service - OpenSSH per-connection server daemon (10.0.0.1:49566). Apr 14 00:45:00.799854 sshd[6507]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:00.803857 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:00.822008 systemd-logind[1566]: New session 27 of user core. Apr 14 00:45:00.834657 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 14 00:45:01.325979 sshd[6507]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:01.354944 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:49566.service: Deactivated successfully. Apr 14 00:45:01.360337 systemd[1]: session-27.scope: Deactivated successfully. Apr 14 00:45:01.363730 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit. Apr 14 00:45:01.374398 systemd-logind[1566]: Removed session 27. Apr 14 00:45:06.346582 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:46814.service - OpenSSH per-connection server daemon (10.0.0.1:46814). Apr 14 00:45:06.396934 sshd[6542]: Accepted publickey for core from 10.0.0.1 port 46814 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:06.402700 sshd[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:06.426697 systemd-logind[1566]: New session 28 of user core. Apr 14 00:45:06.432566 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 14 00:45:06.772621 sshd[6542]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:06.780820 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:46814.service: Deactivated successfully. Apr 14 00:45:06.787224 systemd[1]: session-28.scope: Deactivated successfully. Apr 14 00:45:06.788957 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit. Apr 14 00:45:06.793080 systemd-logind[1566]: Removed session 28. Apr 14 00:45:06.812539 kubelet[2695]: E0414 00:45:06.812351 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:11.794630 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:46824.service - OpenSSH per-connection server daemon (10.0.0.1:46824). 
Apr 14 00:45:11.813069 kubelet[2695]: E0414 00:45:11.812003 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:11.861441 sshd[6585]: Accepted publickey for core from 10.0.0.1 port 46824 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:11.864310 sshd[6585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:11.885136 systemd-logind[1566]: New session 29 of user core. Apr 14 00:45:11.903154 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 14 00:45:12.208467 sshd[6585]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:12.234741 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:46824.service: Deactivated successfully. Apr 14 00:45:12.288580 systemd[1]: session-29.scope: Deactivated successfully. Apr 14 00:45:12.293288 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit. Apr 14 00:45:12.300223 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:46826.service - OpenSSH per-connection server daemon (10.0.0.1:46826). Apr 14 00:45:12.304259 systemd-logind[1566]: Removed session 29. Apr 14 00:45:12.365888 sshd[6600]: Accepted publickey for core from 10.0.0.1 port 46826 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:12.370721 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:12.384006 systemd-logind[1566]: New session 30 of user core. Apr 14 00:45:12.405160 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 14 00:45:13.190948 sshd[6600]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:13.203368 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:46840.service - OpenSSH per-connection server daemon (10.0.0.1:46840). Apr 14 00:45:13.205460 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:46826.service: Deactivated successfully. Apr 14 00:45:13.211962 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit. Apr 14 00:45:13.214484 systemd[1]: session-30.scope: Deactivated successfully. Apr 14 00:45:13.216834 systemd-logind[1566]: Removed session 30. Apr 14 00:45:13.305535 sshd[6611]: Accepted publickey for core from 10.0.0.1 port 46840 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:13.310883 sshd[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:13.324214 systemd-logind[1566]: New session 31 of user core. Apr 14 00:45:13.331138 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 14 00:45:14.839381 sshd[6611]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:14.886037 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:46850.service - OpenSSH per-connection server daemon (10.0.0.1:46850). Apr 14 00:45:14.887296 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:46840.service: Deactivated successfully. Apr 14 00:45:14.904192 systemd[1]: session-31.scope: Deactivated successfully. Apr 14 00:45:14.920053 systemd-logind[1566]: Session 31 logged out. Waiting for processes to exit. Apr 14 00:45:14.931688 systemd-logind[1566]: Removed session 31. 
Apr 14 00:45:15.007243 sshd[6637]: Accepted publickey for core from 10.0.0.1 port 46850 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:15.013958 sshd[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:15.028674 systemd-logind[1566]: New session 32 of user core. Apr 14 00:45:15.036218 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 14 00:45:16.677592 sshd[6637]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:16.693595 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:45326.service - OpenSSH per-connection server daemon (10.0.0.1:45326). Apr 14 00:45:16.694391 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:46850.service: Deactivated successfully. Apr 14 00:45:16.707836 systemd[1]: session-32.scope: Deactivated successfully. Apr 14 00:45:16.711725 systemd-logind[1566]: Session 32 logged out. Waiting for processes to exit. Apr 14 00:45:16.720609 systemd-logind[1566]: Removed session 32. Apr 14 00:45:16.912083 sshd[6654]: Accepted publickey for core from 10.0.0.1 port 45326 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:16.916845 sshd[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:16.946078 systemd-logind[1566]: New session 33 of user core. Apr 14 00:45:16.950667 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 14 00:45:17.375841 sshd[6654]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:17.390648 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:45326.service: Deactivated successfully. Apr 14 00:45:17.413920 systemd[1]: session-33.scope: Deactivated successfully. Apr 14 00:45:17.416167 systemd-logind[1566]: Session 33 logged out. Waiting for processes to exit. Apr 14 00:45:17.420608 systemd-logind[1566]: Removed session 33. Apr 14 00:45:22.393346 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). Apr 14 00:45:22.520459 sshd[6675]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:22.525290 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:22.539379 systemd-logind[1566]: New session 34 of user core. Apr 14 00:45:22.550622 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 14 00:45:22.851795 sshd[6675]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:22.860879 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:45328.service: Deactivated successfully. Apr 14 00:45:22.866032 systemd-logind[1566]: Session 34 logged out. Waiting for processes to exit. Apr 14 00:45:22.866056 systemd[1]: session-34.scope: Deactivated successfully. Apr 14 00:45:22.868916 systemd-logind[1566]: Removed session 34. Apr 14 00:45:27.875600 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). Apr 14 00:45:27.928044 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:27.931316 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:27.941364 systemd-logind[1566]: New session 35 of user core. Apr 14 00:45:27.948457 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 14 00:45:28.320201 sshd[6692]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:28.327404 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:41040.service: Deactivated successfully. Apr 14 00:45:28.335182 systemd[1]: session-35.scope: Deactivated successfully. Apr 14 00:45:28.336898 systemd-logind[1566]: Session 35 logged out. Waiting for processes to exit. Apr 14 00:45:28.339213 systemd-logind[1566]: Removed session 35. Apr 14 00:45:32.818817 kubelet[2695]: E0414 00:45:32.816733 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:33.343809 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:41056.service - OpenSSH per-connection server daemon (10.0.0.1:41056). Apr 14 00:45:33.502713 sshd[6748]: Accepted publickey for core from 10.0.0.1 port 41056 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:33.506958 sshd[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:33.522354 systemd-logind[1566]: New session 36 of user core. Apr 14 00:45:33.537767 systemd[1]: Started session-36.scope - Session 36 of User core. Apr 14 00:45:33.813118 kubelet[2695]: E0414 00:45:33.812980 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:33.910229 sshd[6748]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:33.920736 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:41056.service: Deactivated successfully. Apr 14 00:45:33.930953 systemd[1]: session-36.scope: Deactivated successfully. Apr 14 00:45:33.933585 systemd-logind[1566]: Session 36 logged out. Waiting for processes to exit. Apr 14 00:45:33.937860 systemd-logind[1566]: Removed session 36. Apr 14 00:45:34.811406 kubelet[2695]: E0414 00:45:34.811277 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 00:45:38.923877 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818). Apr 14 00:45:38.983548 sshd[6783]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ Apr 14 00:45:38.985900 sshd[6783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 00:45:39.018845 systemd-logind[1566]: New session 37 of user core. Apr 14 00:45:39.028727 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 14 00:45:39.415159 sshd[6783]: pam_unix(sshd:session): session closed for user core Apr 14 00:45:39.426281 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:41818.service: Deactivated successfully. Apr 14 00:45:39.433441 systemd[1]: session-37.scope: Deactivated successfully. Apr 14 00:45:39.436981 systemd-logind[1566]: Session 37 logged out. Waiting for processes to exit. Apr 14 00:45:39.439259 systemd-logind[1566]: Removed session 37. Apr 14 00:45:44.431768 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:41828.service - OpenSSH per-connection server daemon (10.0.0.1:41828). 
Apr 14 00:45:44.509907 sshd[6821]: Accepted publickey for core from 10.0.0.1 port 41828 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:45:44.518326 sshd[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:45:44.539932 systemd-logind[1566]: New session 38 of user core.
Apr 14 00:45:44.550464 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 14 00:45:45.021917 sshd[6821]: pam_unix(sshd:session): session closed for user core
Apr 14 00:45:45.029967 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:41828.service: Deactivated successfully.
Apr 14 00:45:45.040973 systemd-logind[1566]: Session 38 logged out. Waiting for processes to exit.
Apr 14 00:45:45.042852 systemd[1]: session-38.scope: Deactivated successfully.
Apr 14 00:45:45.049148 systemd-logind[1566]: Removed session 38.
Apr 14 00:45:45.818951 kubelet[2695]: E0414 00:45:45.818870 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:45:50.048664 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:33186.service - OpenSSH per-connection server daemon (10.0.0.1:33186).
Apr 14 00:45:50.148399 sshd[6856]: Accepted publickey for core from 10.0.0.1 port 33186 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:45:50.153212 sshd[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:45:50.170459 systemd-logind[1566]: New session 39 of user core.
Apr 14 00:45:50.188640 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 14 00:45:50.552856 sshd[6856]: pam_unix(sshd:session): session closed for user core
Apr 14 00:45:50.561863 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:33186.service: Deactivated successfully.
Apr 14 00:45:50.568774 systemd[1]: session-39.scope: Deactivated successfully.
Apr 14 00:45:50.570164 systemd-logind[1566]: Session 39 logged out. Waiting for processes to exit.
Apr 14 00:45:50.573270 systemd-logind[1566]: Removed session 39.
Apr 14 00:45:55.605882 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:36424.service - OpenSSH per-connection server daemon (10.0.0.1:36424).
Apr 14 00:45:55.692188 sshd[6873]: Accepted publickey for core from 10.0.0.1 port 36424 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:45:55.694881 sshd[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:45:55.712257 systemd-logind[1566]: New session 40 of user core.
Apr 14 00:45:55.725385 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 14 00:45:56.088849 sshd[6873]: pam_unix(sshd:session): session closed for user core
Apr 14 00:45:56.094998 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:36424.service: Deactivated successfully.
Apr 14 00:45:56.100368 systemd[1]: session-40.scope: Deactivated successfully.
Apr 14 00:45:56.102404 systemd-logind[1566]: Session 40 logged out. Waiting for processes to exit.
Apr 14 00:45:56.106389 systemd-logind[1566]: Removed session 40.
Apr 14 00:46:01.067996 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438).
Apr 14 00:46:01.124253 sshd[6922]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:01.127883 sshd[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:01.138227 systemd-logind[1566]: New session 41 of user core.
Apr 14 00:46:01.144300 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 14 00:46:01.523756 sshd[6922]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:01.534202 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:36438.service: Deactivated successfully.
Apr 14 00:46:01.552863 systemd[1]: session-41.scope: Deactivated successfully.
Apr 14 00:46:01.557380 systemd-logind[1566]: Session 41 logged out. Waiting for processes to exit.
Apr 14 00:46:01.563160 systemd-logind[1566]: Removed session 41.
Apr 14 00:46:05.818947 kubelet[2695]: E0414 00:46:05.818357 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:06.604320 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450).
Apr 14 00:46:06.678600 sshd[6956]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:06.682712 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:06.705624 systemd-logind[1566]: New session 42 of user core.
Apr 14 00:46:06.714151 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 14 00:46:07.191589 sshd[6956]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:07.201153 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:37450.service: Deactivated successfully.
Apr 14 00:46:07.204988 systemd-logind[1566]: Session 42 logged out. Waiting for processes to exit.
Apr 14 00:46:07.205088 systemd[1]: session-42.scope: Deactivated successfully.
Apr 14 00:46:07.208061 systemd-logind[1566]: Removed session 42.
Apr 14 00:46:12.228218 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:37456.service - OpenSSH per-connection server daemon (10.0.0.1:37456).
Apr 14 00:46:12.359911 sshd[6994]: Accepted publickey for core from 10.0.0.1 port 37456 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:12.363771 sshd[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:12.369454 systemd-logind[1566]: New session 43 of user core.
Apr 14 00:46:12.384199 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 14 00:46:12.768899 sshd[6994]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:12.772746 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:37456.service: Deactivated successfully.
Apr 14 00:46:12.775733 systemd[1]: session-43.scope: Deactivated successfully.
Apr 14 00:46:12.781033 systemd-logind[1566]: Session 43 logged out. Waiting for processes to exit.
Apr 14 00:46:12.793537 systemd-logind[1566]: Removed session 43.
Apr 14 00:46:15.813563 kubelet[2695]: E0414 00:46:15.813372 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:17.791315 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:47774.service - OpenSSH per-connection server daemon (10.0.0.1:47774).
Apr 14 00:46:17.908435 sshd[7014]: Accepted publickey for core from 10.0.0.1 port 47774 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:17.913638 sshd[7014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:17.934839 systemd-logind[1566]: New session 44 of user core.
Apr 14 00:46:17.944584 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 14 00:46:18.265437 sshd[7014]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:18.273206 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:47774.service: Deactivated successfully.
Apr 14 00:46:18.280827 systemd[1]: session-44.scope: Deactivated successfully.
Apr 14 00:46:18.283024 systemd-logind[1566]: Session 44 logged out. Waiting for processes to exit.
Apr 14 00:46:18.286809 systemd-logind[1566]: Removed session 44.
Apr 14 00:46:23.287025 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:47790.service - OpenSSH per-connection server daemon (10.0.0.1:47790).
Apr 14 00:46:23.344986 sshd[7039]: Accepted publickey for core from 10.0.0.1 port 47790 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:23.348012 sshd[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:23.364779 systemd-logind[1566]: New session 45 of user core.
Apr 14 00:46:23.374690 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 14 00:46:23.782297 sshd[7039]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:23.789277 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:47790.service: Deactivated successfully.
Apr 14 00:46:23.792775 systemd-logind[1566]: Session 45 logged out. Waiting for processes to exit.
Apr 14 00:46:23.793935 systemd[1]: session-45.scope: Deactivated successfully.
Apr 14 00:46:23.797148 systemd-logind[1566]: Removed session 45.
Apr 14 00:46:28.812048 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016).
Apr 14 00:46:28.812999 kubelet[2695]: E0414 00:46:28.812026 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 00:46:28.891712 sshd[7080]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:28.904070 sshd[7080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:28.926342 systemd-logind[1566]: New session 46 of user core.
Apr 14 00:46:28.935970 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 14 00:46:29.323389 sshd[7080]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:29.332404 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:51016.service: Deactivated successfully.
Apr 14 00:46:29.340793 systemd[1]: session-46.scope: Deactivated successfully.
Apr 14 00:46:29.343773 systemd-logind[1566]: Session 46 logged out. Waiting for processes to exit.
Apr 14 00:46:29.349419 systemd-logind[1566]: Removed session 46.
Apr 14 00:46:34.344992 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:51018.service - OpenSSH per-connection server daemon (10.0.0.1:51018).
Apr 14 00:46:34.416048 sshd[7136]: Accepted publickey for core from 10.0.0.1 port 51018 ssh2: RSA SHA256:K6U3DjgUE7fXEUx9Sn30xuWvGlmV/pnS811HORr3cgQ
Apr 14 00:46:34.421446 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 00:46:34.487486 systemd-logind[1566]: New session 47 of user core.
Apr 14 00:46:34.496950 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 14 00:46:34.819643 sshd[7136]: pam_unix(sshd:session): session closed for user core
Apr 14 00:46:34.825049 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:51018.service: Deactivated successfully.
Apr 14 00:46:34.830213 systemd-logind[1566]: Session 47 logged out. Waiting for processes to exit.
Apr 14 00:46:34.830274 systemd[1]: session-47.scope: Deactivated successfully.
Apr 14 00:46:34.835226 systemd-logind[1566]: Removed session 47.