Apr 21 10:47:06.912012 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026 Apr 21 10:47:06.912030 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:47:06.912039 kernel: BIOS-provided physical RAM map: Apr 21 10:47:06.912043 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 21 10:47:06.912047 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 21 10:47:06.912052 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 21 10:47:06.912057 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 21 10:47:06.912061 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 21 10:47:06.912065 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 21 10:47:06.912070 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 21 10:47:06.912075 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 21 10:47:06.912079 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 21 10:47:06.912084 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 21 10:47:06.912088 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 21 10:47:06.912094 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 21 10:47:06.912098 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 21 10:47:06.912104 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
21 10:47:06.912109 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 21 10:47:06.912113 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 21 10:47:06.912118 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 21 10:47:06.912122 kernel: NX (Execute Disable) protection: active Apr 21 10:47:06.912127 kernel: APIC: Static calls initialized Apr 21 10:47:06.912131 kernel: efi: EFI v2.7 by EDK II Apr 21 10:47:06.912136 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Apr 21 10:47:06.912141 kernel: SMBIOS 2.8 present. Apr 21 10:47:06.912145 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 21 10:47:06.912150 kernel: Hypervisor detected: KVM Apr 21 10:47:06.912155 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 21 10:47:06.912160 kernel: kvm-clock: using sched offset of 5565207534 cycles Apr 21 10:47:06.912165 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 21 10:47:06.912170 kernel: tsc: Detected 2793.438 MHz processor Apr 21 10:47:06.912175 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 21 10:47:06.912180 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 21 10:47:06.912185 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 21 10:47:06.912190 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 21 10:47:06.912194 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 21 10:47:06.912200 kernel: Using GB pages for direct mapping Apr 21 10:47:06.912205 kernel: Secure boot disabled Apr 21 10:47:06.912210 kernel: ACPI: Early table checksum verification disabled Apr 21 10:47:06.912215 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 21 10:47:06.912222 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 21 10:47:06.912227 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912232 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912239 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 21 10:47:06.912244 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912249 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912255 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912260 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:47:06.912265 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 21 10:47:06.912269 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 21 10:47:06.912276 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 21 10:47:06.912281 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 21 10:47:06.912286 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 21 10:47:06.912291 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 21 10:47:06.912296 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 21 10:47:06.912301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 21 10:47:06.912305 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 21 10:47:06.912310 kernel: No NUMA configuration found Apr 21 10:47:06.912315 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 21 10:47:06.912322 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 21 10:47:06.912327 kernel: Zone ranges: Apr 21 10:47:06.912332 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 21 10:47:06.912337 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 21 10:47:06.912342 kernel: Normal empty Apr 21 10:47:06.912347 kernel: Movable zone start for each node Apr 21 10:47:06.912352 kernel: Early memory node ranges Apr 21 10:47:06.912356 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 21 10:47:06.912361 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 21 10:47:06.912366 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 21 10:47:06.912373 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 21 10:47:06.912377 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 21 10:47:06.912382 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 21 10:47:06.912387 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 21 10:47:06.912392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 21 10:47:06.912397 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 21 10:47:06.912402 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 21 10:47:06.912407 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 21 10:47:06.912412 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 21 10:47:06.912418 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 21 10:47:06.912423 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 21 10:47:06.912428 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 21 10:47:06.912433 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 21 10:47:06.912438 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 21 10:47:06.912443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 21 10:47:06.912448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 21 10:47:06.912453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 21 10:47:06.912458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 21 
10:47:06.912463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 21 10:47:06.912469 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 21 10:47:06.912474 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 21 10:47:06.912479 kernel: TSC deadline timer available Apr 21 10:47:06.912484 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 21 10:47:06.912489 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 21 10:47:06.912494 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 21 10:47:06.912498 kernel: kvm-guest: setup PV sched yield Apr 21 10:47:06.912504 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 21 10:47:06.912508 kernel: Booting paravirtualized kernel on KVM Apr 21 10:47:06.912515 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 21 10:47:06.912520 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 21 10:47:06.912525 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 21 10:47:06.912530 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 21 10:47:06.912535 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 21 10:47:06.912540 kernel: kvm-guest: PV spinlocks enabled Apr 21 10:47:06.912545 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 21 10:47:06.912551 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:47:06.912557 kernel: random: crng init done Apr 21 10:47:06.912562 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 21 10:47:06.912568 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 21 10:47:06.912572 kernel: Fallback order for Node 0: 0 Apr 21 10:47:06.912577 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 21 10:47:06.912582 kernel: Policy zone: DMA32 Apr 21 10:47:06.912587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 21 10:47:06.912593 kernel: Memory: 2394676K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 172120K reserved, 0K cma-reserved) Apr 21 10:47:06.912598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 21 10:47:06.912604 kernel: ftrace: allocating 37996 entries in 149 pages Apr 21 10:47:06.912609 kernel: ftrace: allocated 149 pages with 4 groups Apr 21 10:47:06.912614 kernel: Dynamic Preempt: voluntary Apr 21 10:47:06.912619 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 21 10:47:06.912630 kernel: rcu: RCU event tracing is enabled. Apr 21 10:47:06.912636 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 21 10:47:06.912642 kernel: Trampoline variant of Tasks RCU enabled. Apr 21 10:47:06.912647 kernel: Rude variant of Tasks RCU enabled. Apr 21 10:47:06.912688 kernel: Tracing variant of Tasks RCU enabled. Apr 21 10:47:06.912694 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 21 10:47:06.912699 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 21 10:47:06.912705 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 21 10:47:06.912712 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 21 10:47:06.912735 kernel: Console: colour dummy device 80x25 Apr 21 10:47:06.912742 kernel: printk: console [ttyS0] enabled Apr 21 10:47:06.912747 kernel: ACPI: Core revision 20230628 Apr 21 10:47:06.912753 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 21 10:47:06.912760 kernel: APIC: Switch to symmetric I/O mode setup Apr 21 10:47:06.912766 kernel: x2apic enabled Apr 21 10:47:06.912791 kernel: APIC: Switched APIC routing to: physical x2apic Apr 21 10:47:06.912796 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 21 10:47:06.912802 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 21 10:47:06.912808 kernel: kvm-guest: setup PV IPIs Apr 21 10:47:06.912813 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 21 10:47:06.912819 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 21 10:47:06.912825 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 21 10:47:06.912832 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 21 10:47:06.912838 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 21 10:47:06.912843 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 21 10:47:06.912849 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 21 10:47:06.912854 kernel: Spectre V2 : Mitigation: Retpolines Apr 21 10:47:06.912860 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 21 10:47:06.912866 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 21 10:47:06.912871 kernel: RETBleed: Vulnerable Apr 21 10:47:06.912877 kernel: Speculative Store Bypass: Vulnerable Apr 21 10:47:06.912884 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 21 10:47:06.912890 kernel: GDS: Unknown: Dependent on hypervisor status Apr 21 10:47:06.912895 kernel: active return thunk: its_return_thunk Apr 21 10:47:06.912901 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 21 10:47:06.912906 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 21 10:47:06.912911 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 21 10:47:06.912917 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 21 10:47:06.912922 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 21 10:47:06.912928 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 21 10:47:06.912934 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 21 10:47:06.912940 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 21 10:47:06.912945 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 21 10:47:06.912951 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 21 10:47:06.912956 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 21 10:47:06.912962 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 21 10:47:06.912967 kernel: Freeing SMP alternatives memory: 32K Apr 21 10:47:06.912973 kernel: pid_max: default: 32768 minimum: 301 Apr 21 10:47:06.912978 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 21 10:47:06.912985 kernel: landlock: Up and running. Apr 21 10:47:06.912990 kernel: SELinux: Initializing. 
Apr 21 10:47:06.912996 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:47:06.913001 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:47:06.913007 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 21 10:47:06.913013 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:47:06.913018 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:47:06.913024 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 21 10:47:06.913031 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 21 10:47:06.913036 kernel: signal: max sigframe size: 3632 Apr 21 10:47:06.913041 kernel: rcu: Hierarchical SRCU implementation. Apr 21 10:47:06.913047 kernel: rcu: Max phase no-delay instances is 400. Apr 21 10:47:06.913052 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 21 10:47:06.913058 kernel: smp: Bringing up secondary CPUs ... Apr 21 10:47:06.913063 kernel: smpboot: x86: Booting SMP configuration: Apr 21 10:47:06.913069 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 21 10:47:06.913074 kernel: smp: Brought up 1 node, 4 CPUs Apr 21 10:47:06.913081 kernel: smpboot: Max logical packages: 1 Apr 21 10:47:06.913087 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 21 10:47:06.913092 kernel: devtmpfs: initialized Apr 21 10:47:06.913098 kernel: x86/mm: Memory block size: 128MB Apr 21 10:47:06.913103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 21 10:47:06.913109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 21 10:47:06.913114 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 21 10:47:06.913120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 21 10:47:06.913125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 21 10:47:06.913132 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 21 10:47:06.913137 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 21 10:47:06.913143 kernel: pinctrl core: initialized pinctrl subsystem Apr 21 10:47:06.913148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 21 10:47:06.913154 kernel: audit: initializing netlink subsys (disabled) Apr 21 10:47:06.913160 kernel: audit: type=2000 audit(1776768425.938:1): state=initialized audit_enabled=0 res=1 Apr 21 10:47:06.913165 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 21 10:47:06.913170 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 21 10:47:06.913176 kernel: cpuidle: using governor menu Apr 21 10:47:06.913183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 21 10:47:06.913224 kernel: dca service started, version 1.12.1 Apr 21 10:47:06.913231 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 21 
10:47:06.913236 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 21 10:47:06.913242 kernel: PCI: Using configuration type 1 for base access Apr 21 10:47:06.913247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 21 10:47:06.913253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 21 10:47:06.913258 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 21 10:47:06.913264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 21 10:47:06.913271 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 21 10:47:06.913276 kernel: ACPI: Added _OSI(Module Device) Apr 21 10:47:06.913282 kernel: ACPI: Added _OSI(Processor Device) Apr 21 10:47:06.913287 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 21 10:47:06.913292 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 21 10:47:06.913298 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 21 10:47:06.913303 kernel: ACPI: Interpreter enabled Apr 21 10:47:06.913309 kernel: ACPI: PM: (supports S0 S3 S5) Apr 21 10:47:06.913314 kernel: ACPI: Using IOAPIC for interrupt routing Apr 21 10:47:06.913321 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 21 10:47:06.913327 kernel: PCI: Using E820 reservations for host bridge windows Apr 21 10:47:06.913332 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 21 10:47:06.913338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 21 10:47:06.913464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 21 10:47:06.913527 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 21 10:47:06.913582 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 21 10:47:06.913591 kernel: PCI host bridge to bus 0000:00 Apr 21 10:47:06.913681 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 21 10:47:06.913737 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 21 10:47:06.913810 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 21 10:47:06.913860 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 21 10:47:06.913908 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 21 10:47:06.913956 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 21 10:47:06.914007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 21 10:47:06.914072 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 21 10:47:06.914157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 21 10:47:06.914213 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 21 10:47:06.914267 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 21 10:47:06.914322 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 21 10:47:06.914376 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 21 10:47:06.914434 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 21 10:47:06.914493 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 21 10:47:06.914549 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 21 10:47:06.914604 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 21 10:47:06.914696 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 21 10:47:06.914760 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 21 10:47:06.914933 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 21 10:47:06.914992 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 21 10:47:06.915048 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 21 10:47:06.915108 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 21 10:47:06.915163 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 21 10:47:06.915218 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 21 10:47:06.915272 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 21 10:47:06.915331 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 21 10:47:06.915390 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 21 10:47:06.915445 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 21 10:47:06.915512 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 21 10:47:06.915567 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 21 10:47:06.915622 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 21 10:47:06.915718 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 21 10:47:06.915800 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 21 10:47:06.915808 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 21 10:47:06.915814 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 21 10:47:06.915820 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 21 10:47:06.915825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 21 10:47:06.915830 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 21 10:47:06.915836 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 21 10:47:06.915841 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 21 10:47:06.915849 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 21 10:47:06.915854 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 21 10:47:06.915859 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 21 10:47:06.915865 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 21 10:47:06.915870 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 21 10:47:06.915876 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 21 10:47:06.915882 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 21 10:47:06.915887 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 21 10:47:06.915892 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 21 10:47:06.915900 kernel: iommu: Default domain type: Translated Apr 21 10:47:06.915905 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 21 10:47:06.915911 kernel: efivars: Registered efivars operations Apr 21 10:47:06.915916 kernel: PCI: Using ACPI for IRQ routing Apr 21 10:47:06.915922 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 21 10:47:06.915927 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 21 10:47:06.915932 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 21 10:47:06.915938 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 21 10:47:06.915943 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 21 10:47:06.916000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 21 10:47:06.916054 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 21 10:47:06.916110 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 21 10:47:06.916116 kernel: vgaarb: loaded Apr 21 10:47:06.916122 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 21 10:47:06.916128 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 21 10:47:06.916133 kernel: clocksource: Switched to clocksource kvm-clock Apr 21 10:47:06.916139 kernel: VFS: Disk quotas dquot_6.6.0 Apr 21 10:47:06.916144 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 21 10:47:06.916151 kernel: pnp: PnP ACPI init Apr 21 10:47:06.916210 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 21 10:47:06.916218 kernel: pnp: PnP ACPI: found 6 devices Apr 21 10:47:06.916224 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 21 10:47:06.916229 kernel: NET: Registered PF_INET protocol family Apr 21 10:47:06.916235 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 21 10:47:06.916240 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 21 10:47:06.916246 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 21 10:47:06.916253 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 21 10:47:06.916259 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 21 10:47:06.916264 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 21 10:47:06.916270 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:47:06.916276 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 21 10:47:06.916281 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 10:47:06.916287 kernel: NET: Registered PF_XDP protocol family Apr 21 10:47:06.916343 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 21 10:47:06.916399 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 21 10:47:06.916455 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 21 10:47:06.916505 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:47:06.916554 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:47:06.916602 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 21 10:47:06.916686 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 21 10:47:06.916738 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 21 10:47:06.916745 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:47:06.916751 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 21 10:47:06.916759 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 21 10:47:06.916764 kernel: Initialise system trusted keyrings Apr 21 10:47:06.916791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:47:06.916797 kernel: Key type asymmetric registered Apr 21 10:47:06.916802 kernel: Asymmetric key parser 'x509' registered Apr 21 10:47:06.916808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:47:06.916813 kernel: io scheduler mq-deadline registered Apr 21 10:47:06.916819 kernel: io scheduler kyber registered Apr 21 10:47:06.916824 kernel: io scheduler bfq registered Apr 21 10:47:06.916832 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:47:06.916838 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:47:06.916843 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:47:06.916849 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 21 10:47:06.916854 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:47:06.916859 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:47:06.916865 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:47:06.916871 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:47:06.916876 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:47:06.916937 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 21 10:47:06.916990 kernel: rtc_cmos 00:04: registered as rtc0 Apr 21 10:47:06.916997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 21 10:47:06.917047 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T10:47:06 UTC (1776768426) Apr 21 10:47:06.917098 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 21 10:47:06.917104 kernel: intel_pstate: CPU model not supported Apr 21 
10:47:06.917128 kernel: efifb: probing for efifb Apr 21 10:47:06.917134 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 21 10:47:06.917141 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 21 10:47:06.917146 kernel: efifb: scrolling: redraw Apr 21 10:47:06.917152 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 21 10:47:06.917157 kernel: Console: switching to colour frame buffer device 100x37 Apr 21 10:47:06.917180 kernel: fb0: EFI VGA frame buffer device Apr 21 10:47:06.917197 kernel: pstore: Using crash dump compression: deflate Apr 21 10:47:06.917204 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 10:47:06.917225 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:47:06.917244 kernel: Segment Routing with IPv6 Apr 21 10:47:06.917252 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:47:06.917258 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:47:06.917264 kernel: Key type dns_resolver registered Apr 21 10:47:06.917269 kernel: IPI shorthand broadcast: enabled Apr 21 10:47:06.917275 kernel: sched_clock: Marking stable (959013718, 256108026)->(1298617588, -83495844) Apr 21 10:47:06.917281 kernel: registered taskstats version 1 Apr 21 10:47:06.917287 kernel: Loading compiled-in X.509 certificates Apr 21 10:47:06.917292 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:47:06.917298 kernel: Key type .fscrypt registered Apr 21 10:47:06.917305 kernel: Key type fscrypt-provisioning registered Apr 21 10:47:06.917310 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 21 10:47:06.917316 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:47:06.917335 kernel: ima: No architecture policies found Apr 21 10:47:06.917341 kernel: clk: Disabling unused clocks Apr 21 10:47:06.917346 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:47:06.917352 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:47:06.917358 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:47:06.917363 kernel: Run /init as init process Apr 21 10:47:06.917370 kernel: with arguments: Apr 21 10:47:06.917376 kernel: /init Apr 21 10:47:06.917381 kernel: with environment: Apr 21 10:47:06.917387 kernel: HOME=/ Apr 21 10:47:06.917392 kernel: TERM=linux Apr 21 10:47:06.917400 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:47:06.917408 systemd[1]: Detected virtualization kvm. Apr 21 10:47:06.917416 systemd[1]: Detected architecture x86-64. Apr 21 10:47:06.917421 systemd[1]: Running in initrd. Apr 21 10:47:06.917427 systemd[1]: No hostname configured, using default hostname. Apr 21 10:47:06.917433 systemd[1]: Hostname set to . Apr 21 10:47:06.917439 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:47:06.917447 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:47:06.917453 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:47:06.917459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:47:06.917465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
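The systemd banner above lists compile-time features, each token prefixed with `+` (built in) or `-` (omitted at build time). A minimal Python sketch (an illustrative helper, not part of systemd) that splits such a banner into enabled and disabled sets, using the string from this log:

```python
# Parse a systemd feature banner into enabled/disabled feature sets.
# The banner string is copied from the log above; tokens without a +/-
# prefix (e.g. "default-hierarchy=unified") are kept as plain options.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
          "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
          "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
          "-XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")

tokens = banner.split()
enabled = {t[1:] for t in tokens if t.startswith("+")}
disabled = {t[1:] for t in tokens if t.startswith("-")}
options = {t for t in tokens if t[0] not in "+-"}

print(sorted(disabled))
```

This makes it quick to check, for instance, that this build has TPM2 support but no AppArmor.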
Apr 21 10:47:06.917472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:47:06.917478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:47:06.917484 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:47:06.917493 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:47:06.917500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:47:06.917506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:47:06.917512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:47:06.917517 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:47:06.917523 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:47:06.917529 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:47:06.917535 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:47:06.917543 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:47:06.917549 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:47:06.917555 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:47:06.917561 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:47:06.917567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:47:06.917573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:47:06.917579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:47:06.917585 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:47:06.917590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:47:06.917598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:47:06.917604 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:47:06.917610 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:47:06.917616 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:47:06.917621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:47:06.917627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:47:06.917633 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:47:06.917682 systemd-journald[194]: Collecting audit messages is disabled. Apr 21 10:47:06.917700 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:47:06.917706 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:47:06.917715 systemd-journald[194]: Journal started Apr 21 10:47:06.917729 systemd-journald[194]: Runtime Journal (/run/log/journal/4290300c672549b29980a63375e7a20a) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:47:06.922695 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:47:06.924404 systemd-modules-load[195]: Inserted module 'overlay' Apr 21 10:47:06.935824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:47:06.940689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:47:06.941560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:47:06.946335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:47:06.951204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 21 10:47:06.959609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:47:06.968729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:47:06.969900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:47:06.977085 kernel: Bridge firewalling registered Apr 21 10:47:06.973444 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 21 10:47:06.973954 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:47:06.976535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:47:06.977439 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:47:06.990914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:47:06.994490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:47:07.002950 dracut-cmdline[225]: dracut-dracut-053 Apr 21 10:47:07.006648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:47:07.010396 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:47:07.010793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:47:07.039607 systemd-resolved[244]: Positive Trust Anchors: Apr 21 10:47:07.039640 systemd-resolved[244]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:47:07.039705 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:47:07.041556 systemd-resolved[244]: Defaulting to hostname 'linux'. Apr 21 10:47:07.042297 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:47:07.044457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:47:07.104696 kernel: SCSI subsystem initialized Apr 21 10:47:07.112731 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:47:07.123711 kernel: iscsi: registered transport (tcp) Apr 21 10:47:07.142168 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:47:07.142210 kernel: QLogic iSCSI HBA Driver Apr 21 10:47:07.173471 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:47:07.182864 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:47:07.205682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
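The dracut cmdline hook finished above consumed the kernel command line recorded at the top of this boot. A hypothetical sketch of splitting such a cmdline into bare flags and key=value parameters, using a shortened version of the line from this log:

```python
# Split a kernel command line into bare flags and key=value parameters.
# A repeated key (dracut prepended rootflags=rw a second time on this
# boot) simply keeps the last value seen.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected")

params, flags = {}, set()
for token in cmdline.split():
    if "=" in token:
        key, _, value = token.partition("=")
        params[key] = value
    else:
        flags.add(token)

print(params["root"], params["console"])
```

The same split is what lets the initrd resolve `root=LABEL=ROOT` and direct the console to `ttyS0` at 115200 baud.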
Apr 21 10:47:07.205713 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:47:07.207706 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:47:07.245723 kernel: raid6: avx512x4 gen() 43713 MB/s Apr 21 10:47:07.263725 kernel: raid6: avx512x2 gen() 42290 MB/s Apr 21 10:47:07.280718 kernel: raid6: avx512x1 gen() 41758 MB/s Apr 21 10:47:07.298714 kernel: raid6: avx2x4 gen() 36188 MB/s Apr 21 10:47:07.316695 kernel: raid6: avx2x2 gen() 30981 MB/s Apr 21 10:47:07.335725 kernel: raid6: avx2x1 gen() 17125 MB/s Apr 21 10:47:07.335746 kernel: raid6: using algorithm avx512x4 gen() 43713 MB/s Apr 21 10:47:07.355917 kernel: raid6: .... xor() 9457 MB/s, rmw enabled Apr 21 10:47:07.355942 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:47:07.380726 kernel: xor: automatically using best checksumming function avx Apr 21 10:47:07.526737 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:47:07.536618 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:47:07.548917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:47:07.560143 systemd-udevd[416]: Using default interface naming scheme 'v255'. Apr 21 10:47:07.563189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:47:07.567810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:47:07.579498 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Apr 21 10:47:07.602597 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:47:07.610823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:47:07.645322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:47:07.659860 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
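The raid6 lines above show the kernel benchmarking each available gen() implementation and keeping the fastest (it then selects a recovery algorithm separately). The gen() choice reduces to a max over measured throughput; a sketch with the numbers from this boot:

```python
# gen() throughputs (MB/s) measured by the raid6 benchmark in this log.
gen_mbps = {
    "avx512x4": 43713,
    "avx512x2": 42290,
    "avx512x1": 41758,
    "avx2x4": 36188,
    "avx2x2": 30981,
    "avx2x1": 17125,
}

# The kernel keeps the algorithm with the highest measured throughput.
best = max(gen_mbps, key=gen_mbps.get)
print(best, gen_mbps[best])
```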
Apr 21 10:47:07.673944 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:47:07.675413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:47:07.682003 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:47:07.684837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:47:07.688356 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:47:07.703760 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 21 10:47:07.705981 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:47:07.710258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:47:07.710432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:47:07.717287 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:47:07.720374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:47:07.720566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:47:07.723105 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:47:07.729889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:47:07.733769 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:47:07.741227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:47:07.746346 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:47:07.746364 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 21 10:47:07.746583 kernel: libata version 3.00 loaded. Apr 21 10:47:07.746591 kernel: AES CTR mode by8 optimization enabled Apr 21 10:47:07.741389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:47:07.758572 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:47:07.758587 kernel: GPT:9289727 != 19775487 Apr 21 10:47:07.758594 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:47:07.758601 kernel: GPT:9289727 != 19775487 Apr 21 10:47:07.758607 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:47:07.758632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:47:07.762794 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:47:07.762940 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:47:07.763063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:47:07.771643 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:47:07.771874 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:47:07.777558 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Apr 21 10:47:07.778721 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (477) Apr 21 10:47:07.781076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
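The virtio-blk probe above reports 19775488 512-byte sectors, while the GPT warnings show the backup header at sector 9289727 rather than the last sector; this pattern is typical when a smaller disk image has been written onto a larger virtual disk (the kernel's advice to repair the table with GNU Parted, or a tool such as sgdisk, still applies). The reported sizes follow directly from the sector count; a sketch of the arithmetic, illustrative only:

```python
# Disk geometry from the virtio-blk probe line in this log.
SECTOR = 512
sectors = 19775488            # total logical blocks on /dev/vda
alt_header = 9289727          # where the backup GPT header actually sits

size_gb = sectors * SECTOR / 10**9      # decimal gigabytes
size_gib = sectors * SECTOR / 2**30     # binary gibibytes

# The backup header belongs on the last sector; the gap shows how far
# the disk grew after the image was written.
missing_sectors = (sectors - 1) - alt_header

print(f"{size_gb:.1f} GB / {size_gib:.2f} GiB, gap {missing_sectors} sectors")
```

The computed sizes match the kernel's own "10.1 GB/9.43 GiB", and the gap works out to exactly 10485760 sectors, i.e. 5 GiB of growth.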
Apr 21 10:47:07.790485 kernel: scsi host0: ahci Apr 21 10:47:07.790602 kernel: scsi host1: ahci Apr 21 10:47:07.790714 kernel: scsi host2: ahci Apr 21 10:47:07.790806 kernel: scsi host3: ahci Apr 21 10:47:07.790873 kernel: scsi host4: ahci Apr 21 10:47:07.790943 kernel: scsi host5: ahci Apr 21 10:47:07.791006 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 21 10:47:07.794349 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 21 10:47:07.794378 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 21 10:47:07.794387 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 21 10:47:07.794395 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 21 10:47:07.794402 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 21 10:47:07.802417 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 21 10:47:07.803762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:47:07.815727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 21 10:47:07.817078 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 21 10:47:07.830579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 10:47:07.843920 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:47:07.848821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:47:07.855180 disk-uuid[568]: Primary Header is updated. Apr 21 10:47:07.855180 disk-uuid[568]: Secondary Entries is updated. Apr 21 10:47:07.855180 disk-uuid[568]: Secondary Header is updated. 
Apr 21 10:47:07.857563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:47:07.874623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:47:08.101711 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:47:08.101797 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:47:08.110798 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:47:08.110838 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:47:08.111712 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:47:08.113713 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:47:08.114691 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:47:08.116994 kernel: ata3.00: applying bridge limits Apr 21 10:47:08.117108 kernel: ata3.00: configured for UDMA/100 Apr 21 10:47:08.121723 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:47:08.177959 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:47:08.178293 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:47:08.194742 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:47:08.863829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 21 10:47:08.863876 disk-uuid[569]: The operation has completed successfully. Apr 21 10:47:08.886845 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:47:08.886944 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:47:08.904889 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:47:08.909505 sh[606]: Success Apr 21 10:47:08.920798 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:47:08.947998 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:47:08.968020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 21 10:47:08.970405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:47:08.982514 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:47:08.982539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:47:08.982548 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:47:08.984524 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:47:08.985978 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:47:08.993361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:47:08.996425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:47:09.008815 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:47:09.010726 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:47:09.025161 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:47:09.025188 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:47:09.025196 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:47:09.030713 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:47:09.038161 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:47:09.041763 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:47:09.048123 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:47:09.054894 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:47:09.097358 ignition[708]: Ignition 2.19.0 Apr 21 10:47:09.097365 ignition[708]: Stage: fetch-offline Apr 21 10:47:09.097388 ignition[708]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:09.097394 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:09.097453 ignition[708]: parsed url from cmdline: "" Apr 21 10:47:09.097455 ignition[708]: no config URL provided Apr 21 10:47:09.097458 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:47:09.097463 ignition[708]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:47:09.097480 ignition[708]: op(1): [started] loading QEMU firmware config module Apr 21 10:47:09.097483 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 21 10:47:09.106540 ignition[708]: op(1): [finished] loading QEMU firmware config module Apr 21 10:47:09.106552 ignition[708]: QEMU firmware config was not found. Ignoring... Apr 21 10:47:09.130090 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:47:09.143063 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:47:09.178312 systemd-networkd[795]: lo: Link UP Apr 21 10:47:09.178334 systemd-networkd[795]: lo: Gained carrier Apr 21 10:47:09.179523 systemd-networkd[795]: Enumeration completed Apr 21 10:47:09.179614 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:47:09.180969 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:47:09.180972 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 21 10:47:09.181889 systemd-networkd[795]: eth0: Link UP Apr 21 10:47:09.181892 systemd-networkd[795]: eth0: Gained carrier Apr 21 10:47:09.181899 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:47:09.187754 systemd[1]: Reached target network.target - Network. Apr 21 10:47:09.273833 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 10:47:09.355801 ignition[708]: parsing config with SHA512: e80f93d7662808ff4bc77c0e43fba3a79b3413a3d36d11bb55f787dec2214207012cc9275204dc0ec48c546e5b8b31df585e9a110b74761633c87a04508bd3e5 Apr 21 10:47:09.370401 unknown[708]: fetched base config from "system" Apr 21 10:47:09.371183 unknown[708]: fetched user config from "qemu" Apr 21 10:47:09.373827 ignition[708]: fetch-offline: fetch-offline passed Apr 21 10:47:09.373908 ignition[708]: Ignition finished successfully Apr 21 10:47:09.381419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:47:09.388822 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 21 10:47:09.412208 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:47:09.458543 ignition[799]: Ignition 2.19.0 Apr 21 10:47:09.458568 ignition[799]: Stage: kargs Apr 21 10:47:09.459571 ignition[799]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:09.459581 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:09.461554 ignition[799]: kargs: kargs passed Apr 21 10:47:09.461598 ignition[799]: Ignition finished successfully Apr 21 10:47:09.471408 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:47:09.489618 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
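systemd-networkd above acquired 10.0.0.157/16 with gateway 10.0.0.1 via DHCP. Python's `ipaddress` module can sanity-check such a lease; a small sketch with the values from this log:

```python
import ipaddress

# DHCPv4 lease reported by systemd-networkd in this log.
iface = ipaddress.ip_interface("10.0.0.157/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)               # the /16 the address lives in
print(gateway in iface.network)    # the gateway must be on-link
```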
Apr 21 10:47:09.521161 ignition[807]: Ignition 2.19.0 Apr 21 10:47:09.521189 ignition[807]: Stage: disks Apr 21 10:47:09.521394 ignition[807]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:09.521405 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:09.525527 ignition[807]: disks: disks passed Apr 21 10:47:09.531946 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:47:09.525584 ignition[807]: Ignition finished successfully Apr 21 10:47:09.539157 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:47:09.545360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:47:09.555883 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:47:09.557364 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:47:09.564373 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:47:09.580965 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:47:09.600359 systemd-fsck[818]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:47:09.612322 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:47:09.632858 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:47:09.799924 kernel: EXT4-fs (vda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:47:09.801598 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:47:09.806922 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:47:09.825927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:47:09.829104 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:47:09.831939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
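The fsck summary above ("clean, 14/553520 files, 52654/553472 blocks") reports used/total inode and block counts for the freshly created ROOT filesystem. A one-off sketch turning those counts into utilisation percentages:

```python
# Counts from the systemd-fsck line for /dev/disk/by-label/ROOT.
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472

inode_pct = 100 * files_used / files_total
block_pct = 100 * blocks_used / blocks_total

print(f"inodes {inode_pct:.3f}% used, blocks {block_pct:.1f}% used")
```

As expected on a first boot, the filesystem is nearly empty: under 10% of blocks and a negligible fraction of inodes are in use.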
Apr 21 10:47:09.831986 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:47:09.832011 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:47:09.843019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:47:09.851282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:47:09.875913 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (826) Apr 21 10:47:09.875947 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:47:09.875961 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:47:09.875977 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:47:09.875990 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:47:09.878951 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:47:09.915518 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:47:09.920000 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:47:09.927580 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:47:09.937490 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:47:10.042153 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:47:10.060962 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:47:10.063370 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:47:10.077131 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:47:10.082801 kernel: BTRFS info (device vda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:47:10.099989 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 21 10:47:10.108497 ignition[942]: INFO : Ignition 2.19.0 Apr 21 10:47:10.108497 ignition[942]: INFO : Stage: mount Apr 21 10:47:10.114973 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:10.114973 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:10.114973 ignition[942]: INFO : mount: mount passed Apr 21 10:47:10.114973 ignition[942]: INFO : Ignition finished successfully Apr 21 10:47:10.120555 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:47:10.134238 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:47:10.148614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:47:10.166729 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (954) Apr 21 10:47:10.172140 kernel: BTRFS info (device vda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:47:10.172215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:47:10.172231 kernel: BTRFS info (device vda6): using free space tree Apr 21 10:47:10.182772 kernel: BTRFS info (device vda6): auto enabling async discard Apr 21 10:47:10.184850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:47:10.225277 ignition[971]: INFO : Ignition 2.19.0 Apr 21 10:47:10.225277 ignition[971]: INFO : Stage: files Apr 21 10:47:10.229214 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:10.229214 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:10.236163 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:47:10.244572 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:47:10.250068 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:47:10.253491 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:47:10.257278 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:47:10.263069 unknown[971]: wrote ssh authorized keys file for user: core Apr 21 10:47:10.266245 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:47:10.270602 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:47:10.278733 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:47:10.849940 systemd-networkd[795]: eth0: Gained IPv6LL Apr 21 10:47:11.044194 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:47:12.875035 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:47:12.875035 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:47:12.882676 ignition[971]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 21 10:47:13.176139 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 10:47:13.481728 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 21 10:47:13.481728 ignition[971]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 10:47:13.489118 ignition[971]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:47:13.493038 ignition[971]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:47:13.493038 ignition[971]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 10:47:13.493038 ignition[971]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 21 10:47:13.493038 ignition[971]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:47:13.505036 ignition[971]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 21 10:47:13.505036 ignition[971]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 21 10:47:13.505036 ignition[971]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 21 10:47:13.534202 ignition[971]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:47:13.537496 ignition[971]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 21 10:47:13.540512 ignition[971]: INFO : files: op(f): [finished] 
setting preset to disabled for "coreos-metadata.service" Apr 21 10:47:13.540512 ignition[971]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:47:13.540512 ignition[971]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:47:13.540512 ignition[971]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:47:13.540512 ignition[971]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:47:13.540512 ignition[971]: INFO : files: files passed Apr 21 10:47:13.540512 ignition[971]: INFO : Ignition finished successfully Apr 21 10:47:13.552440 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:47:13.569936 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:47:13.574604 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 10:47:13.578347 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:47:13.578429 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:47:13.586879 initrd-setup-root-after-ignition[999]: grep: /sysroot/oem/oem-release: No such file or directory Apr 21 10:47:13.591535 initrd-setup-root-after-ignition[1002]: grep: Apr 21 10:47:13.591535 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:47:13.595904 initrd-setup-root-after-ignition[1002]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:47:13.595904 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:47:13.602431 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 21 10:47:13.604963 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:47:13.617788 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:47:13.637047 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:47:13.637153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:47:13.640385 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:47:13.647439 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:47:13.648207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:47:13.662859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:47:13.674029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:47:13.701875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:47:13.714282 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:47:13.715209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:47:13.719511 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:47:13.723644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:47:13.723827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:47:13.730122 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:47:13.733926 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:47:13.737288 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:47:13.740743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:47:13.744682 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Apr 21 10:47:13.748623 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:47:13.749582 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:47:13.754629 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:47:13.759315 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:47:13.762574 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:47:13.766108 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:47:13.766210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:47:13.771922 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:47:13.775543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:47:13.779493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:47:13.783362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:47:13.784165 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:47:13.784273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 10:47:13.791734 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:47:13.791884 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:47:13.795739 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:47:13.799065 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 10:47:13.801163 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:47:13.804823 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:47:13.808412 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:47:13.812708 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 21 10:47:13.812767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:47:13.815998 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:47:13.816047 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:47:13.819437 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:47:13.819507 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:47:13.820398 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:47:13.820456 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:47:13.840875 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:47:13.844866 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:47:13.846487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:47:13.854926 ignition[1027]: INFO : Ignition 2.19.0 Apr 21 10:47:13.854926 ignition[1027]: INFO : Stage: umount Apr 21 10:47:13.854926 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:47:13.854926 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 21 10:47:13.854926 ignition[1027]: INFO : umount: umount passed Apr 21 10:47:13.854926 ignition[1027]: INFO : Ignition finished successfully Apr 21 10:47:13.846566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:47:13.850708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:47:13.850980 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:47:13.856209 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:47:13.856287 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:47:13.858277 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 21 10:47:13.858357 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:47:13.860369 systemd[1]: Stopped target network.target - Network. Apr 21 10:47:13.864230 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:47:13.864268 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:47:13.870432 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:47:13.870461 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:47:13.875964 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 10:47:13.875997 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:47:13.877768 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 10:47:13.877830 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 10:47:13.878561 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 10:47:13.884734 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 10:47:13.888763 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:47:13.891728 systemd-networkd[795]: eth0: DHCPv6 lease lost Apr 21 10:47:13.893292 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 10:47:13.893387 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 10:47:13.897770 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 10:47:13.897922 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 10:47:13.901351 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 10:47:13.901380 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:47:13.922931 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 10:47:13.925299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 21 10:47:13.925341 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:47:13.929088 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:47:13.929119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:47:13.932073 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 10:47:13.932104 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 10:47:13.936217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 10:47:13.936251 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:47:13.941044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:47:13.950418 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 10:47:13.950491 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 10:47:13.950728 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 10:47:13.950756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 10:47:13.978954 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 10:47:13.979075 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 10:47:13.989489 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 10:47:13.989641 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:47:13.993731 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 10:47:13.993757 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 10:47:13.997779 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 10:47:13.997831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:47:14.001532 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Apr 21 10:47:14.001565 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:47:14.006020 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 10:47:14.006050 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 10:47:14.010707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:47:14.010741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:47:14.036897 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 10:47:14.038134 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 10:47:14.038183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:47:14.042255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 21 10:47:14.042284 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:47:14.046078 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:47:14.046119 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:47:14.050616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:47:14.050688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:47:14.064895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 10:47:14.064988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 10:47:14.068905 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 10:47:14.076714 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 10:47:14.085877 systemd[1]: Switching root. Apr 21 10:47:14.115899 systemd-journald[194]: Journal stopped Apr 21 10:47:14.861065 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 21 10:47:14.861113 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 10:47:14.861128 kernel: SELinux: policy capability open_perms=1 Apr 21 10:47:14.861138 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 10:47:14.861145 kernel: SELinux: policy capability always_check_network=0 Apr 21 10:47:14.861156 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 10:47:14.861164 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 10:47:14.861171 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 10:47:14.861179 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 10:47:14.861186 kernel: audit: type=1403 audit(1776768434.226:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 10:47:14.861197 systemd[1]: Successfully loaded SELinux policy in 36.941ms. Apr 21 10:47:14.861210 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.778ms. Apr 21 10:47:14.861221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:47:14.861229 systemd[1]: Detected virtualization kvm. Apr 21 10:47:14.861237 systemd[1]: Detected architecture x86-64. Apr 21 10:47:14.861244 systemd[1]: Detected first boot. Apr 21 10:47:14.861252 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:47:14.861259 zram_generator::config[1072]: No configuration found. Apr 21 10:47:14.861268 systemd[1]: Populated /etc with preset unit settings. Apr 21 10:47:14.861277 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 10:47:14.861286 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 21 10:47:14.861295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 10:47:14.861303 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 10:47:14.861311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 10:47:14.861318 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 10:47:14.861326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 10:47:14.861333 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 10:47:14.861341 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 10:47:14.861349 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 10:47:14.861358 systemd[1]: Created slice user.slice - User and Session Slice. Apr 21 10:47:14.861369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:47:14.861377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:47:14.861385 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 10:47:14.861393 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 10:47:14.861401 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 10:47:14.861412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:47:14.861419 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 10:47:14.861427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:47:14.861436 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 21 10:47:14.861443 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 10:47:14.861451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 10:47:14.861460 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 10:47:14.861468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:47:14.861476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:47:14.861483 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:47:14.861491 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:47:14.861500 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 10:47:14.861508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 10:47:14.861515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:47:14.861523 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:47:14.861531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:47:14.861539 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 10:47:14.861546 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 10:47:14.861557 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 10:47:14.861566 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 10:47:14.861575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:47:14.861583 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 10:47:14.861591 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 10:47:14.861599 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 21 10:47:14.861607 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:47:14.861615 systemd[1]: Reached target machines.target - Containers. Apr 21 10:47:14.861622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 10:47:14.861630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:47:14.861639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:47:14.861681 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 10:47:14.861689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:47:14.861697 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:47:14.861705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:47:14.861714 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 21 10:47:14.861721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:47:14.861730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 10:47:14.861739 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 10:47:14.861748 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 10:47:14.861755 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 10:47:14.861763 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 10:47:14.861771 kernel: fuse: init (API version 7.39) Apr 21 10:47:14.861778 kernel: loop: module loaded Apr 21 10:47:14.861786 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 21 10:47:14.861793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:47:14.861820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 10:47:14.861828 kernel: ACPI: bus type drm_connector registered Apr 21 10:47:14.861849 systemd-journald[1153]: Collecting audit messages is disabled. Apr 21 10:47:14.861865 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 10:47:14.861874 systemd-journald[1153]: Journal started Apr 21 10:47:14.861892 systemd-journald[1153]: Runtime Journal (/run/log/journal/4290300c672549b29980a63375e7a20a) is 6.0M, max 48.3M, 42.2M free. Apr 21 10:47:14.554997 systemd[1]: Queued start job for default target multi-user.target. Apr 21 10:47:14.584146 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 21 10:47:14.584471 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 10:47:14.867984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:47:14.872107 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 10:47:14.872138 systemd[1]: Stopped verity-setup.service. Apr 21 10:47:14.877757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:47:14.880713 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:47:14.882199 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 10:47:14.884246 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 10:47:14.886354 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 10:47:14.888289 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 10:47:14.890386 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 21 10:47:14.892518 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 10:47:14.894537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 10:47:14.897082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:47:14.899535 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 10:47:14.899757 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 10:47:14.902198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:47:14.902331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:47:14.904774 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:47:14.904940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:47:14.907153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:47:14.907297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:47:14.909991 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 10:47:14.910137 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 10:47:14.912376 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:47:14.912514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:47:14.914922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:47:14.917192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 10:47:14.919758 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 10:47:14.922252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:47:14.932281 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 21 10:47:14.946790 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 21 10:47:14.949974 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 10:47:14.952048 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 10:47:14.952088 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:47:14.954640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:47:14.957335 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 10:47:14.960262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 10:47:14.962215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:47:14.963081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 10:47:14.965768 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 10:47:14.968236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:47:14.969012 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 10:47:14.971192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:47:14.972590 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:47:14.974867 systemd-journald[1153]: Time spent on flushing to /var/log/journal/4290300c672549b29980a63375e7a20a is 15.412ms for 996 entries. Apr 21 10:47:14.974867 systemd-journald[1153]: System Journal (/var/log/journal/4290300c672549b29980a63375e7a20a) is 8.0M, max 195.6M, 187.6M free. 
Apr 21 10:47:14.997359 systemd-journald[1153]: Received client request to flush runtime journal. Apr 21 10:47:14.997390 kernel: loop0: detected capacity change from 0 to 228704 Apr 21 10:47:14.977565 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:47:14.980555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:47:14.983841 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 21 10:47:14.987609 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 10:47:14.989960 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 21 10:47:14.994203 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 10:47:15.005121 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 21 10:47:15.008319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 21 10:47:15.014238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:47:15.016836 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 10:47:15.024737 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:47:15.022771 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 21 10:47:15.022783 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 21 10:47:15.028549 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:47:15.033185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:47:15.039406 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 10:47:15.042761 udevadm[1190]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 21 10:47:15.049074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:47:15.055003 kernel: loop1: detected capacity change from 0 to 140768 Apr 21 10:47:15.050361 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:47:15.065118 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 21 10:47:15.080977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:47:15.096561 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Apr 21 10:47:15.096591 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Apr 21 10:47:15.100164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:47:15.103958 kernel: loop2: detected capacity change from 0 to 142488 Apr 21 10:47:15.145696 kernel: loop3: detected capacity change from 0 to 228704 Apr 21 10:47:15.159721 kernel: loop4: detected capacity change from 0 to 140768 Apr 21 10:47:15.170686 kernel: loop5: detected capacity change from 0 to 142488 Apr 21 10:47:15.180606 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 21 10:47:15.181006 (sd-merge)[1221]: Merged extensions into '/usr'. Apr 21 10:47:15.184076 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:47:15.184101 systemd[1]: Reloading... Apr 21 10:47:15.223712 zram_generator::config[1247]: No configuration found. Apr 21 10:47:15.256892 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:47:15.307987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 21 10:47:15.337475 systemd[1]: Reloading finished in 153 ms.
Apr 21 10:47:15.369617 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:47:15.372141 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:47:15.374618 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:47:15.388888 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:47:15.391402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:47:15.394485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:47:15.398217 systemd[1]: Reloading requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:47:15.398242 systemd[1]: Reloading...
Apr 21 10:47:15.406071 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:47:15.406266 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:47:15.406876 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:47:15.407044 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 21 10:47:15.407101 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 21 10:47:15.409047 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:47:15.409054 systemd-tmpfiles[1286]: Skipping /boot
Apr 21 10:47:15.414325 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:47:15.414360 systemd-tmpfiles[1286]: Skipping /boot
Apr 21 10:47:15.421276 systemd-udevd[1287]: Using default interface naming scheme 'v255'.
Apr 21 10:47:15.436749 zram_generator::config[1310]: No configuration found.
Apr 21 10:47:15.475690 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1340)
Apr 21 10:47:15.515742 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 10:47:15.521718 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:47:15.526017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:47:15.552454 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 21 10:47:15.552700 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:47:15.552795 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:47:15.552912 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:47:15.552974 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:47:15.572547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 10:47:15.575274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:47:15.575322 systemd[1]: Reloading finished in 176 ms.
Apr 21 10:47:15.579894 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:47:15.592873 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:47:15.647231 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:47:15.684254 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:47:15.712219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:47:15.717875 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:47:15.721126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:47:15.723552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:47:15.724409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:47:15.727510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:47:15.730917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:47:15.734436 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:47:15.736893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:47:15.737620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:47:15.740883 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:47:15.748110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:47:15.750479 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:47:15.754834 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:47:15.759126 augenrules[1409]: No rules
Apr 21 10:47:15.759937 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:47:15.763211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:47:15.767862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:47:15.768463 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:47:15.771103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:47:15.773365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:47:15.773499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:47:15.775970 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:47:15.776084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:47:15.778461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:47:15.778606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:47:15.781309 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:47:15.781423 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:47:15.783707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:47:15.784994 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:47:15.797963 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:47:15.799049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:47:15.799092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:47:15.802785 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:47:15.805131 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:47:15.806707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:47:15.807080 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:47:15.808535 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:47:15.812054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:47:15.816976 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:47:15.816956 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:47:15.833351 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:47:15.844936 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:47:15.847879 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:47:15.851271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:47:15.860130 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:47:15.876959 systemd-networkd[1404]: lo: Link UP
Apr 21 10:47:15.876983 systemd-networkd[1404]: lo: Gained carrier
Apr 21 10:47:15.877851 systemd-networkd[1404]: Enumeration completed
Apr 21 10:47:15.877921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:47:15.879529 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:47:15.879532 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:47:15.880146 systemd-networkd[1404]: eth0: Link UP
Apr 21 10:47:15.880163 systemd-networkd[1404]: eth0: Gained carrier
Apr 21 10:47:15.880173 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:47:15.880613 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:47:15.883163 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:47:15.884274 systemd-resolved[1406]: Positive Trust Anchors:
Apr 21 10:47:15.884309 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:47:15.884335 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:47:15.886086 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:47:15.887320 systemd-resolved[1406]: Defaulting to hostname 'linux'.
Apr 21 10:47:15.896923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:47:15.899278 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:47:15.901526 systemd[1]: Reached target network.target - Network.
Apr 21 10:47:15.902718 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 10:47:15.903243 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Apr 21 10:47:15.903360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:47:16.459270 systemd-resolved[1406]: Clock change detected. Flushing caches.
Apr 21 10:47:16.459312 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 21 10:47:16.459341 systemd-timesyncd[1407]: Initial clock synchronization to Tue 2026-04-21 10:47:16.459222 UTC.
Apr 21 10:47:16.461107 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:47:16.463135 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:47:16.465475 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:47:16.467921 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:47:16.469979 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:47:16.472372 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:47:16.474662 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:47:16.474702 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:47:16.476371 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:47:16.478704 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:47:16.481733 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:47:16.494762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:47:16.497512 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:47:16.499675 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:47:16.501494 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:47:16.503238 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:47:16.503270 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:47:16.504133 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:47:16.506794 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:47:16.508299 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:47:16.511054 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:47:16.512945 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:47:16.513669 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:47:16.517946 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:47:16.521972 jq[1452]: false
Apr 21 10:47:16.522535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:47:16.528994 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys.
Apr 21 10:47:16.531659 extend-filesystems[1453]: Found loop3
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found loop4
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found loop5
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found sr0
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda1
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda2
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda3
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found usr
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda4
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda6
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda7
Apr 21 10:47:16.534041 extend-filesystems[1453]: Found vda9
Apr 21 10:47:16.534041 extend-filesystems[1453]: Checking size of /dev/vda9
Apr 21 10:47:16.576625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 21 10:47:16.576652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1351)
Apr 21 10:47:16.535531 dbus-daemon[1451]: [system] SELinux support is enabled
Apr 21 10:47:16.576828 extend-filesystems[1453]: Resized partition /dev/vda9
Apr 21 10:47:16.534910 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:47:16.578920 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:47:16.539241 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:47:16.539502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:47:16.581822 update_engine[1469]: I20260421 10:47:16.562415 1469 main.cc:92] Flatcar Update Engine starting
Apr 21 10:47:16.581822 update_engine[1469]: I20260421 10:47:16.566825 1469 update_check_scheduler.cc:74] Next update check in 8m52s
Apr 21 10:47:16.541113 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:47:16.591689 jq[1472]: true
Apr 21 10:47:16.548708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:47:16.552206 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:47:16.563401 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:47:16.563536 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:47:16.563713 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:47:16.563832 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:47:16.573163 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:47:16.573293 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:47:16.591276 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:47:16.595762 jq[1477]: true
Apr 21 10:47:16.599484 tar[1476]: linux-amd64/LICENSE
Apr 21 10:47:16.599626 tar[1476]: linux-amd64/helm
Apr 21 10:47:16.601435 systemd-logind[1466]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:47:16.601472 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:47:16.604657 systemd-logind[1466]: New seat seat0.
Apr 21 10:47:16.616218 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:47:16.621986 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:47:16.625601 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 21 10:47:16.629654 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:47:16.629804 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:47:16.632541 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:47:16.632624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:47:16.640757 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 21 10:47:16.640757 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 21 10:47:16.640757 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 21 10:47:16.653993 extend-filesystems[1453]: Resized filesystem in /dev/vda9
Apr 21 10:47:16.642403 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:47:16.647720 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:47:16.647923 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:47:16.658122 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:47:16.659701 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:47:16.662960 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 10:47:16.686151 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:47:16.687576 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:47:16.706557 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:47:16.714543 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:47:16.719589 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:47:16.719760 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:47:16.727113 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:47:16.734498 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:47:16.738455 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:47:16.744261 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:47:16.746717 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:47:16.754898 containerd[1480]: time="2026-04-21T10:47:16.754731106Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:47:16.771982 containerd[1480]: time="2026-04-21T10:47:16.771632974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773397 containerd[1480]: time="2026-04-21T10:47:16.773355704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773397 containerd[1480]: time="2026-04-21T10:47:16.773395297Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:47:16.773441 containerd[1480]: time="2026-04-21T10:47:16.773408339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:47:16.773659 containerd[1480]: time="2026-04-21T10:47:16.773515349Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:47:16.773659 containerd[1480]: time="2026-04-21T10:47:16.773528370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773659 containerd[1480]: time="2026-04-21T10:47:16.773562057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773659 containerd[1480]: time="2026-04-21T10:47:16.773571164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773714 containerd[1480]: time="2026-04-21T10:47:16.773701269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773728 containerd[1480]: time="2026-04-21T10:47:16.773712162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773728 containerd[1480]: time="2026-04-21T10:47:16.773721441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:47:16.773752 containerd[1480]: time="2026-04-21T10:47:16.773728269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.774084 containerd[1480]: time="2026-04-21T10:47:16.773784318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.774084 containerd[1480]: time="2026-04-21T10:47:16.773963172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:47:16.774084 containerd[1480]: time="2026-04-21T10:47:16.774077696Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:47:16.774128 containerd[1480]: time="2026-04-21T10:47:16.774088069Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:47:16.774146 containerd[1480]: time="2026-04-21T10:47:16.774139575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:47:16.774208 containerd[1480]: time="2026-04-21T10:47:16.774166372Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:47:16.778638 containerd[1480]: time="2026-04-21T10:47:16.778594438Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:47:16.778677 containerd[1480]: time="2026-04-21T10:47:16.778656008Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:47:16.778677 containerd[1480]: time="2026-04-21T10:47:16.778670171Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:47:16.778714 containerd[1480]: time="2026-04-21T10:47:16.778681489Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:47:16.778714 containerd[1480]: time="2026-04-21T10:47:16.778693417Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:47:16.778812 containerd[1480]: time="2026-04-21T10:47:16.778780694Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779025966Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779106261Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779116655Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779125253Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779134332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779143377Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779151732Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779161698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779171529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779182471Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779191752Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779201756Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779216173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779611 containerd[1480]: time="2026-04-21T10:47:16.779225514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779233785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779241692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779252052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779261633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779269651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779279355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779288707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779298356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779306209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779314272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779322326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779333853Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779347636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779355667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.779794 containerd[1480]: time="2026-04-21T10:47:16.779365787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779397603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779409966Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779418713Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779426771Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779433359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779441568Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779450888Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:47:16.780041 containerd[1480]: time="2026-04-21T10:47:16.779459387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:47:16.780141 containerd[1480]: time="2026-04-21T10:47:16.779643900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:47:16.780141 containerd[1480]: time="2026-04-21T10:47:16.779680300Z" level=info msg="Connect containerd service" Apr 21 10:47:16.780141 containerd[1480]: time="2026-04-21T10:47:16.779705335Z" level=info msg="using legacy CRI server" Apr 21 10:47:16.780141 containerd[1480]: time="2026-04-21T10:47:16.779710243Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:47:16.780141 containerd[1480]: time="2026-04-21T10:47:16.779777202Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:47:16.780306 containerd[1480]: time="2026-04-21T10:47:16.780251649Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780454664Z" level=info msg="Start subscribing containerd event" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780527689Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780532659Z" level=info msg="Start recovering state" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780582838Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780613026Z" level=info msg="Start event monitor" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780623455Z" level=info msg="Start snapshots syncer" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780630821Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780637110Z" level=info msg="Start streaming server" Apr 21 10:47:16.780917 containerd[1480]: time="2026-04-21T10:47:16.780715077Z" level=info msg="containerd successfully booted in 0.027953s" Apr 21 10:47:16.780955 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:47:17.031371 tar[1476]: linux-amd64/README.md Apr 21 10:47:17.049619 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:47:18.120231 systemd-networkd[1404]: eth0: Gained IPv6LL Apr 21 10:47:18.122576 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:47:18.125374 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:47:18.137143 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 10:47:18.140916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:18.143741 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:47:18.157126 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 21 10:47:18.157279 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Apr 21 10:47:18.160038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:47:18.163143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:47:18.785285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:18.787789 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:47:18.790002 systemd[1]: Startup finished in 1.081s (kernel) + 7.514s (initrd) + 4.041s (userspace) = 12.638s. Apr 21 10:47:18.792957 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:47:19.215404 kubelet[1562]: E0421 10:47:19.215236 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:47:19.217493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:47:19.217619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:47:20.473629 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:47:20.474645 systemd[1]: Started sshd@0-10.0.0.157:22-10.0.0.1:33340.service - OpenSSH per-connection server daemon (10.0.0.1:33340). Apr 21 10:47:20.517425 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 33340 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:20.519129 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:20.526446 systemd-logind[1466]: New session 1 of user core. Apr 21 10:47:20.527270 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 21 10:47:20.536126 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:47:20.545711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:47:20.547559 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:47:20.553356 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:47:20.627750 systemd[1580]: Queued start job for default target default.target. Apr 21 10:47:20.639989 systemd[1580]: Created slice app.slice - User Application Slice. Apr 21 10:47:20.640056 systemd[1580]: Reached target paths.target - Paths. Apr 21 10:47:20.640068 systemd[1580]: Reached target timers.target - Timers. Apr 21 10:47:20.641245 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:47:20.650136 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:47:20.650201 systemd[1580]: Reached target sockets.target - Sockets. Apr 21 10:47:20.650210 systemd[1580]: Reached target basic.target - Basic System. Apr 21 10:47:20.650235 systemd[1580]: Reached target default.target - Main User Target. Apr 21 10:47:20.650254 systemd[1580]: Startup finished in 91ms. Apr 21 10:47:20.650515 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:47:20.651783 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:47:20.714129 systemd[1]: Started sshd@1-10.0.0.157:22-10.0.0.1:33342.service - OpenSSH per-connection server daemon (10.0.0.1:33342). Apr 21 10:47:20.744097 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 33342 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:20.745145 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:20.748380 systemd-logind[1466]: New session 2 of user core. 
Apr 21 10:47:20.758043 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:47:20.809788 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:20.817753 systemd[1]: sshd@1-10.0.0.157:22-10.0.0.1:33342.service: Deactivated successfully. Apr 21 10:47:20.818821 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:47:20.819786 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:47:20.820709 systemd[1]: Started sshd@2-10.0.0.157:22-10.0.0.1:33348.service - OpenSSH per-connection server daemon (10.0.0.1:33348). Apr 21 10:47:20.821342 systemd-logind[1466]: Removed session 2. Apr 21 10:47:20.851423 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 33348 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:20.852410 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:20.855807 systemd-logind[1466]: New session 3 of user core. Apr 21 10:47:20.865043 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:47:20.912145 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:20.927009 systemd[1]: sshd@2-10.0.0.157:22-10.0.0.1:33348.service: Deactivated successfully. Apr 21 10:47:20.928139 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:47:20.929086 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:47:20.929920 systemd[1]: Started sshd@3-10.0.0.157:22-10.0.0.1:33358.service - OpenSSH per-connection server daemon (10.0.0.1:33358). Apr 21 10:47:20.930535 systemd-logind[1466]: Removed session 3. Apr 21 10:47:20.961179 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 33358 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:20.962123 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:20.965304 systemd-logind[1466]: New session 4 of user core. 
Apr 21 10:47:20.975052 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:47:21.025642 sshd[1605]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:21.040965 systemd[1]: sshd@3-10.0.0.157:22-10.0.0.1:33358.service: Deactivated successfully. Apr 21 10:47:21.042113 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:47:21.043100 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:47:21.053140 systemd[1]: Started sshd@4-10.0.0.157:22-10.0.0.1:33368.service - OpenSSH per-connection server daemon (10.0.0.1:33368). Apr 21 10:47:21.053932 systemd-logind[1466]: Removed session 4. Apr 21 10:47:21.081237 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 33368 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:21.082266 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:21.085586 systemd-logind[1466]: New session 5 of user core. Apr 21 10:47:21.094002 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:47:21.150119 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:47:21.150327 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:47:21.167830 sudo[1615]: pam_unix(sudo:session): session closed for user root Apr 21 10:47:21.169280 sshd[1612]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:21.188066 systemd[1]: sshd@4-10.0.0.157:22-10.0.0.1:33368.service: Deactivated successfully. Apr 21 10:47:21.189214 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:47:21.190198 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:47:21.191106 systemd[1]: Started sshd@5-10.0.0.157:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382). Apr 21 10:47:21.191672 systemd-logind[1466]: Removed session 5. 
Apr 21 10:47:21.221906 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:21.223012 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:21.226265 systemd-logind[1466]: New session 6 of user core. Apr 21 10:47:21.232059 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:47:21.282622 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:47:21.282836 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:47:21.286342 sudo[1624]: pam_unix(sudo:session): session closed for user root Apr 21 10:47:21.290334 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:47:21.290540 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:47:21.303148 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:47:21.304448 auditctl[1627]: No rules Apr 21 10:47:21.304759 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:47:21.304960 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:47:21.306691 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:47:21.329405 augenrules[1645]: No rules Apr 21 10:47:21.330387 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:47:21.331064 sudo[1623]: pam_unix(sudo:session): session closed for user root Apr 21 10:47:21.332355 sshd[1620]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:21.337469 systemd[1]: sshd@5-10.0.0.157:22-10.0.0.1:33382.service: Deactivated successfully. Apr 21 10:47:21.338471 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 21 10:47:21.339425 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:47:21.340308 systemd[1]: Started sshd@6-10.0.0.157:22-10.0.0.1:33390.service - OpenSSH per-connection server daemon (10.0.0.1:33390). Apr 21 10:47:21.340948 systemd-logind[1466]: Removed session 6. Apr 21 10:47:21.370623 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 33390 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:47:21.371676 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:47:21.375053 systemd-logind[1466]: New session 7 of user core. Apr 21 10:47:21.387070 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:47:21.438343 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:47:21.438562 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:47:21.658138 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:47:21.658198 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:47:21.885769 dockerd[1675]: time="2026-04-21T10:47:21.885677158Z" level=info msg="Starting up" Apr 21 10:47:22.059954 dockerd[1675]: time="2026-04-21T10:47:22.059799903Z" level=info msg="Loading containers: start." Apr 21 10:47:22.158898 kernel: Initializing XFRM netlink socket Apr 21 10:47:22.232440 systemd-networkd[1404]: docker0: Link UP Apr 21 10:47:22.255410 dockerd[1675]: time="2026-04-21T10:47:22.255344859Z" level=info msg="Loading containers: done." Apr 21 10:47:22.267016 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4266367868-merged.mount: Deactivated successfully. 
Apr 21 10:47:22.268835 dockerd[1675]: time="2026-04-21T10:47:22.268777128Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:47:22.268950 dockerd[1675]: time="2026-04-21T10:47:22.268924996Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:47:22.269056 dockerd[1675]: time="2026-04-21T10:47:22.269010256Z" level=info msg="Daemon has completed initialization" Apr 21 10:47:22.300274 dockerd[1675]: time="2026-04-21T10:47:22.300195804Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:47:22.300923 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:47:22.693776 containerd[1480]: time="2026-04-21T10:47:22.693702198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 10:47:23.150178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466346795.mount: Deactivated successfully. 
Apr 21 10:47:23.799134 containerd[1480]: time="2026-04-21T10:47:23.799083643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:23.799893 containerd[1480]: time="2026-04-21T10:47:23.799817417Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 21 10:47:23.800956 containerd[1480]: time="2026-04-21T10:47:23.800914193Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:23.803290 containerd[1480]: time="2026-04-21T10:47:23.803234995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:23.804247 containerd[1480]: time="2026-04-21T10:47:23.804214187Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.110469682s" Apr 21 10:47:23.804318 containerd[1480]: time="2026-04-21T10:47:23.804250512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 10:47:23.804903 containerd[1480]: time="2026-04-21T10:47:23.804881861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 10:47:25.296842 containerd[1480]: time="2026-04-21T10:47:25.296753946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:25.298593 containerd[1480]: time="2026-04-21T10:47:25.298515437Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 21 10:47:25.299831 containerd[1480]: time="2026-04-21T10:47:25.299780173Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:25.305522 containerd[1480]: time="2026-04-21T10:47:25.305453700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:25.308207 containerd[1480]: time="2026-04-21T10:47:25.308003764Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.503089285s" Apr 21 10:47:25.308207 containerd[1480]: time="2026-04-21T10:47:25.308180930Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 10:47:25.309077 containerd[1480]: time="2026-04-21T10:47:25.309018478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 10:47:26.892125 containerd[1480]: time="2026-04-21T10:47:26.890471070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:26.892125 containerd[1480]: 
time="2026-04-21T10:47:26.890876027Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 21 10:47:26.892569 containerd[1480]: time="2026-04-21T10:47:26.892507180Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:26.896959 containerd[1480]: time="2026-04-21T10:47:26.896814910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:26.900559 containerd[1480]: time="2026-04-21T10:47:26.900465666Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.591373632s" Apr 21 10:47:26.900559 containerd[1480]: time="2026-04-21T10:47:26.900535269Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 10:47:26.901396 containerd[1480]: time="2026-04-21T10:47:26.901292549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 10:47:27.747559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720420441.mount: Deactivated successfully. 
Apr 21 10:47:28.175607 containerd[1480]: time="2026-04-21T10:47:28.175484643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:28.176230 containerd[1480]: time="2026-04-21T10:47:28.176150486Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 21 10:47:28.177162 containerd[1480]: time="2026-04-21T10:47:28.177093422Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:28.179920 containerd[1480]: time="2026-04-21T10:47:28.179785097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:28.181092 containerd[1480]: time="2026-04-21T10:47:28.181012708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.279662839s" Apr 21 10:47:28.181092 containerd[1480]: time="2026-04-21T10:47:28.181095417Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 10:47:28.181806 containerd[1480]: time="2026-04-21T10:47:28.181761415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 10:47:28.616125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876907326.mount: Deactivated successfully. 
Apr 21 10:47:29.321159 containerd[1480]: time="2026-04-21T10:47:29.321092898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.321773 containerd[1480]: time="2026-04-21T10:47:29.321732557Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 21 10:47:29.323182 containerd[1480]: time="2026-04-21T10:47:29.323141449Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.325538 containerd[1480]: time="2026-04-21T10:47:29.325492861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.326447 containerd[1480]: time="2026-04-21T10:47:29.326414602Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.144615399s" Apr 21 10:47:29.326447 containerd[1480]: time="2026-04-21T10:47:29.326444888Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 10:47:29.327142 containerd[1480]: time="2026-04-21T10:47:29.327083771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 10:47:29.421543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 10:47:29.432092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 21 10:47:29.536417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:29.540317 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:47:29.580229 kubelet[1957]: E0421 10:47:29.580026 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:47:29.583461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:47:29.583596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:47:29.726690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998656737.mount: Deactivated successfully. Apr 21 10:47:29.731928 containerd[1480]: time="2026-04-21T10:47:29.731833263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.732664 containerd[1480]: time="2026-04-21T10:47:29.732548307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 10:47:29.733522 containerd[1480]: time="2026-04-21T10:47:29.733460609Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.735207 containerd[1480]: time="2026-04-21T10:47:29.735135308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:29.735638 containerd[1480]: time="2026-04-21T10:47:29.735606624Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 408.489261ms" Apr 21 10:47:29.735638 containerd[1480]: time="2026-04-21T10:47:29.735635838Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 10:47:29.736275 containerd[1480]: time="2026-04-21T10:47:29.736122581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 10:47:30.145797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310785164.mount: Deactivated successfully. Apr 21 10:47:30.785995 containerd[1480]: time="2026-04-21T10:47:30.785938506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:30.786634 containerd[1480]: time="2026-04-21T10:47:30.786609701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 21 10:47:30.787364 containerd[1480]: time="2026-04-21T10:47:30.787331411Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:30.790110 containerd[1480]: time="2026-04-21T10:47:30.790027617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:30.792261 containerd[1480]: time="2026-04-21T10:47:30.792219206Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag 
\"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.056072034s" Apr 21 10:47:30.792306 containerd[1480]: time="2026-04-21T10:47:30.792261608Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 10:47:32.919148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:32.937153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:32.955441 systemd[1]: Reloading requested from client PID 2064 ('systemctl') (unit session-7.scope)... Apr 21 10:47:32.955467 systemd[1]: Reloading... Apr 21 10:47:33.009971 zram_generator::config[2106]: No configuration found. Apr 21 10:47:33.081674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:47:33.134361 systemd[1]: Reloading finished in 178 ms. Apr 21 10:47:33.175624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:33.177719 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:47:33.177901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:33.179017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:33.281457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:33.284932 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:47:33.320474 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:47:33.320474 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:47:33.320474 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:47:33.320790 kubelet[2153]: I0421 10:47:33.320490 2153 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:47:33.855592 kubelet[2153]: I0421 10:47:33.855547 2153 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:47:33.855592 kubelet[2153]: I0421 10:47:33.855577 2153 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:47:33.855811 kubelet[2153]: I0421 10:47:33.855779 2153 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:47:33.875761 kubelet[2153]: E0421 10:47:33.875703 2153 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.157:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:47:33.875996 kubelet[2153]: I0421 10:47:33.875971 2153 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:47:33.879065 kubelet[2153]: E0421 10:47:33.879033 2153 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 
10:47:33.879065 kubelet[2153]: I0421 10:47:33.879060 2153 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:47:33.882695 kubelet[2153]: I0421 10:47:33.882655 2153 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:47:33.882919 kubelet[2153]: I0421 10:47:33.882841 2153 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:47:33.883027 kubelet[2153]: I0421 10:47:33.882901 2153 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcile
Period":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:47:33.883027 kubelet[2153]: I0421 10:47:33.883025 2153 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:47:33.883159 kubelet[2153]: I0421 10:47:33.883032 2153 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:47:33.883159 kubelet[2153]: I0421 10:47:33.883136 2153 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:47:33.886479 kubelet[2153]: I0421 10:47:33.886440 2153 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:47:33.886479 kubelet[2153]: I0421 10:47:33.886467 2153 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:47:33.886520 kubelet[2153]: I0421 10:47:33.886505 2153 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:47:33.886537 kubelet[2153]: I0421 10:47:33.886528 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:47:33.891269 kubelet[2153]: I0421 10:47:33.891232 2153 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:47:33.891902 kubelet[2153]: I0421 10:47:33.891720 2153 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:47:33.892277 kubelet[2153]: E0421 10:47:33.892221 2153 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.157:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:47:33.892471 kubelet[2153]: E0421 
10:47:33.892434 2153 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.157:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:47:33.893356 kubelet[2153]: W0421 10:47:33.893068 2153 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 10:47:33.897546 kubelet[2153]: I0421 10:47:33.897515 2153 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:47:33.897592 kubelet[2153]: I0421 10:47:33.897563 2153 server.go:1289] "Started kubelet" Apr 21 10:47:33.898248 kubelet[2153]: I0421 10:47:33.898131 2153 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:47:33.898385 kubelet[2153]: I0421 10:47:33.898356 2153 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:47:33.898417 kubelet[2153]: I0421 10:47:33.898406 2153 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:47:33.898543 kubelet[2153]: I0421 10:47:33.898478 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:47:33.899184 kubelet[2153]: I0421 10:47:33.899148 2153 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:47:33.900688 kubelet[2153]: I0421 10:47:33.899711 2153 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:47:33.901183 kubelet[2153]: E0421 10:47:33.901161 2153 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 10:47:33.901272 kubelet[2153]: I0421 10:47:33.901217 2153 volume_manager.go:297] "Starting Kubelet Volume 
Manager" Apr 21 10:47:33.902034 kubelet[2153]: I0421 10:47:33.901362 2153 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:47:33.902034 kubelet[2153]: I0421 10:47:33.901439 2153 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:47:33.902034 kubelet[2153]: E0421 10:47:33.901016 2153 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.157:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.157:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8597ebeaec283 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 10:47:33.897536131 +0000 UTC m=+0.609700866,LastTimestamp:2026-04-21 10:47:33.897536131 +0000 UTC m=+0.609700866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 10:47:33.902034 kubelet[2153]: E0421 10:47:33.902005 2153 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.157:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:47:33.902220 kubelet[2153]: E0421 10:47:33.902183 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="200ms" Apr 21 10:47:33.902899 kubelet[2153]: E0421 10:47:33.902880 2153 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:47:33.903627 kubelet[2153]: I0421 10:47:33.903392 2153 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:47:33.903627 kubelet[2153]: I0421 10:47:33.903466 2153 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:47:33.904243 kubelet[2153]: I0421 10:47:33.904226 2153 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:47:33.914368 kubelet[2153]: I0421 10:47:33.914350 2153 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:47:33.914368 kubelet[2153]: I0421 10:47:33.914364 2153 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:47:33.914422 kubelet[2153]: I0421 10:47:33.914375 2153 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:47:33.917144 kubelet[2153]: I0421 10:47:33.917114 2153 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:47:33.918091 kubelet[2153]: I0421 10:47:33.918053 2153 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:47:33.918091 kubelet[2153]: I0421 10:47:33.918093 2153 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:47:33.918148 kubelet[2153]: I0421 10:47:33.918124 2153 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:47:33.918148 kubelet[2153]: I0421 10:47:33.918130 2153 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:47:33.918197 kubelet[2153]: E0421 10:47:33.918155 2153 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:47:33.949086 kubelet[2153]: I0421 10:47:33.949007 2153 policy_none.go:49] "None policy: Start" Apr 21 10:47:33.949086 kubelet[2153]: I0421 10:47:33.949057 2153 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:47:33.949207 kubelet[2153]: I0421 10:47:33.949097 2153 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:47:33.949502 kubelet[2153]: E0421 10:47:33.949405 2153 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.157:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:47:33.954889 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 10:47:33.964269 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 10:47:33.966345 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 10:47:33.973601 kubelet[2153]: E0421 10:47:33.973490 2153 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:47:33.973657 kubelet[2153]: I0421 10:47:33.973632 2153 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:47:33.973674 kubelet[2153]: I0421 10:47:33.973641 2153 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:47:33.974173 kubelet[2153]: I0421 10:47:33.973810 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:47:33.974671 kubelet[2153]: E0421 10:47:33.974599 2153 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:47:33.974671 kubelet[2153]: E0421 10:47:33.974659 2153 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 10:47:34.027590 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 21 10:47:34.039490 kubelet[2153]: E0421 10:47:34.039433 2153 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:47:34.041465 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 21 10:47:34.051703 kubelet[2153]: E0421 10:47:34.051664 2153 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:47:34.053436 systemd[1]: Created slice kubepods-burstable-pod4c8204d431b9497d3072717126b4438a.slice - libcontainer container kubepods-burstable-pod4c8204d431b9497d3072717126b4438a.slice. Apr 21 10:47:34.054522 kubelet[2153]: E0421 10:47:34.054486 2153 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:47:34.075809 kubelet[2153]: I0421 10:47:34.075784 2153 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:47:34.076114 kubelet[2153]: E0421 10:47:34.076066 2153 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Apr 21 10:47:34.102581 kubelet[2153]: I0421 10:47:34.102518 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:34.102640 kubelet[2153]: E0421 10:47:34.102614 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="400ms" Apr 21 10:47:34.203533 kubelet[2153]: I0421 10:47:34.203461 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:34.203533 kubelet[2153]: I0421 10:47:34.203530 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:34.203699 kubelet[2153]: I0421 10:47:34.203549 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:34.203699 kubelet[2153]: I0421 10:47:34.203567 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:34.203699 kubelet[2153]: I0421 10:47:34.203610 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:34.203699 kubelet[2153]: I0421 10:47:34.203624 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:34.203699 kubelet[2153]: I0421 10:47:34.203640 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:34.203795 kubelet[2153]: I0421 10:47:34.203686 2153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:34.278001 kubelet[2153]: I0421 10:47:34.277971 2153 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:47:34.278392 kubelet[2153]: E0421 10:47:34.278302 2153 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Apr 21 10:47:34.340990 kubelet[2153]: E0421 10:47:34.340943 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.341760 containerd[1480]: time="2026-04-21T10:47:34.341706009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 21 10:47:34.352445 kubelet[2153]: E0421 10:47:34.352385 2153 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.352817 containerd[1480]: time="2026-04-21T10:47:34.352774440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 21 10:47:34.355459 kubelet[2153]: E0421 10:47:34.355425 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.355924 containerd[1480]: time="2026-04-21T10:47:34.355898842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c8204d431b9497d3072717126b4438a,Namespace:kube-system,Attempt:0,}" Apr 21 10:47:34.503798 kubelet[2153]: E0421 10:47:34.503660 2153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="800ms" Apr 21 10:47:34.664793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832662121.mount: Deactivated successfully. 
Apr 21 10:47:34.668513 containerd[1480]: time="2026-04-21T10:47:34.668459225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:47:34.669099 containerd[1480]: time="2026-04-21T10:47:34.669002381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 21 10:47:34.671135 containerd[1480]: time="2026-04-21T10:47:34.671101021Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:47:34.671968 containerd[1480]: time="2026-04-21T10:47:34.671940962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:47:34.672459 containerd[1480]: time="2026-04-21T10:47:34.672427787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:47:34.673440 containerd[1480]: time="2026-04-21T10:47:34.672908815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:47:34.673896 containerd[1480]: time="2026-04-21T10:47:34.673865025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:47:34.674712 containerd[1480]: time="2026-04-21T10:47:34.674685792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:47:34.675250 
containerd[1480]: time="2026-04-21T10:47:34.675211614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 322.381552ms" Apr 21 10:47:34.677585 containerd[1480]: time="2026-04-21T10:47:34.677543306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 321.586463ms" Apr 21 10:47:34.678018 containerd[1480]: time="2026-04-21T10:47:34.677995417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 336.215502ms" Apr 21 10:47:34.679776 kubelet[2153]: I0421 10:47:34.679729 2153 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:47:34.680069 kubelet[2153]: E0421 10:47:34.680008 2153 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Apr 21 10:47:34.772056 containerd[1480]: time="2026-04-21T10:47:34.771902456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:34.772056 containerd[1480]: time="2026-04-21T10:47:34.771950947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:34.772056 containerd[1480]: time="2026-04-21T10:47:34.771959904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.773013 containerd[1480]: time="2026-04-21T10:47:34.772953293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:34.773013 containerd[1480]: time="2026-04-21T10:47:34.772983211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:34.773013 containerd[1480]: time="2026-04-21T10:47:34.772994946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.773013 containerd[1480]: time="2026-04-21T10:47:34.772020206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.773161 containerd[1480]: time="2026-04-21T10:47:34.773039139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.773238 containerd[1480]: time="2026-04-21T10:47:34.773157643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:34.773238 containerd[1480]: time="2026-04-21T10:47:34.773204455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:34.773238 containerd[1480]: time="2026-04-21T10:47:34.773216144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.773346 containerd[1480]: time="2026-04-21T10:47:34.773268755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:34.795035 systemd[1]: Started cri-containerd-00f6914bed50f7df6d6b67f2c7d9f71903ef01edd36c7c85f61873f82721707e.scope - libcontainer container 00f6914bed50f7df6d6b67f2c7d9f71903ef01edd36c7c85f61873f82721707e. Apr 21 10:47:34.798408 systemd[1]: Started cri-containerd-256e7da35d6dda6d184c30ad321933b4b9f8c5571a80745e933b121687d9e0fd.scope - libcontainer container 256e7da35d6dda6d184c30ad321933b4b9f8c5571a80745e933b121687d9e0fd. Apr 21 10:47:34.799468 systemd[1]: Started cri-containerd-9a74e8fe8ade8607bfa58e8d0bad3d10e5e5b2ffb36e21a6fa5476f2fd4f312b.scope - libcontainer container 9a74e8fe8ade8607bfa58e8d0bad3d10e5e5b2ffb36e21a6fa5476f2fd4f312b. Apr 21 10:47:34.828761 containerd[1480]: time="2026-04-21T10:47:34.828703372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"00f6914bed50f7df6d6b67f2c7d9f71903ef01edd36c7c85f61873f82721707e\"" Apr 21 10:47:34.829728 kubelet[2153]: E0421 10:47:34.829709 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.834518 containerd[1480]: time="2026-04-21T10:47:34.834469366Z" level=info msg="CreateContainer within sandbox \"00f6914bed50f7df6d6b67f2c7d9f71903ef01edd36c7c85f61873f82721707e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:47:34.836300 containerd[1480]: time="2026-04-21T10:47:34.836263639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"256e7da35d6dda6d184c30ad321933b4b9f8c5571a80745e933b121687d9e0fd\"" Apr 21 10:47:34.838314 kubelet[2153]: E0421 10:47:34.836726 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.840022 containerd[1480]: time="2026-04-21T10:47:34.839999307Z" level=info msg="CreateContainer within sandbox \"256e7da35d6dda6d184c30ad321933b4b9f8c5571a80745e933b121687d9e0fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:47:34.842464 containerd[1480]: time="2026-04-21T10:47:34.842401118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c8204d431b9497d3072717126b4438a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a74e8fe8ade8607bfa58e8d0bad3d10e5e5b2ffb36e21a6fa5476f2fd4f312b\"" Apr 21 10:47:34.843023 kubelet[2153]: E0421 10:47:34.843002 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.847019 containerd[1480]: time="2026-04-21T10:47:34.846990922Z" level=info msg="CreateContainer within sandbox \"9a74e8fe8ade8607bfa58e8d0bad3d10e5e5b2ffb36e21a6fa5476f2fd4f312b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:47:34.851636 containerd[1480]: time="2026-04-21T10:47:34.851583836Z" level=info msg="CreateContainer within sandbox \"00f6914bed50f7df6d6b67f2c7d9f71903ef01edd36c7c85f61873f82721707e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb51724e5bed9af1e54ba1b64f78ea94720f626276dcf164609ccfda4c89da62\"" Apr 21 10:47:34.852204 containerd[1480]: time="2026-04-21T10:47:34.852168073Z" level=info msg="StartContainer for \"bb51724e5bed9af1e54ba1b64f78ea94720f626276dcf164609ccfda4c89da62\"" Apr 21 10:47:34.856549 containerd[1480]: 
time="2026-04-21T10:47:34.856509942Z" level=info msg="CreateContainer within sandbox \"256e7da35d6dda6d184c30ad321933b4b9f8c5571a80745e933b121687d9e0fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4aff430e40ef3ea8afc16a1142ff899f76a7e6e2bc38a25bc5246ac5e5e26e94\"" Apr 21 10:47:34.857953 containerd[1480]: time="2026-04-21T10:47:34.857925586Z" level=info msg="StartContainer for \"4aff430e40ef3ea8afc16a1142ff899f76a7e6e2bc38a25bc5246ac5e5e26e94\"" Apr 21 10:47:34.863899 containerd[1480]: time="2026-04-21T10:47:34.863268608Z" level=info msg="CreateContainer within sandbox \"9a74e8fe8ade8607bfa58e8d0bad3d10e5e5b2ffb36e21a6fa5476f2fd4f312b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e180cf84ec96ce1d9033a5c7d4c06d0fa65f17cdada6467d2a2a179de975f636\"" Apr 21 10:47:34.863899 containerd[1480]: time="2026-04-21T10:47:34.863621600Z" level=info msg="StartContainer for \"e180cf84ec96ce1d9033a5c7d4c06d0fa65f17cdada6467d2a2a179de975f636\"" Apr 21 10:47:34.885022 systemd[1]: Started cri-containerd-4aff430e40ef3ea8afc16a1142ff899f76a7e6e2bc38a25bc5246ac5e5e26e94.scope - libcontainer container 4aff430e40ef3ea8afc16a1142ff899f76a7e6e2bc38a25bc5246ac5e5e26e94. Apr 21 10:47:34.885914 systemd[1]: Started cri-containerd-bb51724e5bed9af1e54ba1b64f78ea94720f626276dcf164609ccfda4c89da62.scope - libcontainer container bb51724e5bed9af1e54ba1b64f78ea94720f626276dcf164609ccfda4c89da62. Apr 21 10:47:34.888355 systemd[1]: Started cri-containerd-e180cf84ec96ce1d9033a5c7d4c06d0fa65f17cdada6467d2a2a179de975f636.scope - libcontainer container e180cf84ec96ce1d9033a5c7d4c06d0fa65f17cdada6467d2a2a179de975f636. 
Apr 21 10:47:34.926883 containerd[1480]: time="2026-04-21T10:47:34.926826837Z" level=info msg="StartContainer for \"bb51724e5bed9af1e54ba1b64f78ea94720f626276dcf164609ccfda4c89da62\" returns successfully" Apr 21 10:47:34.929994 containerd[1480]: time="2026-04-21T10:47:34.929891810Z" level=info msg="StartContainer for \"4aff430e40ef3ea8afc16a1142ff899f76a7e6e2bc38a25bc5246ac5e5e26e94\" returns successfully" Apr 21 10:47:34.937544 kubelet[2153]: E0421 10:47:34.937506 2153 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 10:47:34.937944 kubelet[2153]: E0421 10:47:34.937914 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:34.938867 containerd[1480]: time="2026-04-21T10:47:34.938020737Z" level=info msg="StartContainer for \"e180cf84ec96ce1d9033a5c7d4c06d0fa65f17cdada6467d2a2a179de975f636\" returns successfully" Apr 21 10:47:35.483151 kubelet[2153]: I0421 10:47:35.483073 2153 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:47:35.571700 kubelet[2153]: E0421 10:47:35.571644 2153 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 21 10:47:35.673182 kubelet[2153]: I0421 10:47:35.672563 2153 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:47:35.702366 kubelet[2153]: I0421 10:47:35.702318 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:35.707302 kubelet[2153]: E0421 10:47:35.707266 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 
10:47:35.707302 kubelet[2153]: I0421 10:47:35.707295 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:35.708443 kubelet[2153]: E0421 10:47:35.708398 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:35.708443 kubelet[2153]: I0421 10:47:35.708425 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:35.709523 kubelet[2153]: E0421 10:47:35.709485 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:35.891492 kubelet[2153]: I0421 10:47:35.891368 2153 apiserver.go:52] "Watching apiserver" Apr 21 10:47:35.902478 kubelet[2153]: I0421 10:47:35.902421 2153 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:47:35.937412 kubelet[2153]: I0421 10:47:35.937291 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:35.939260 kubelet[2153]: E0421 10:47:35.939236 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:35.939503 kubelet[2153]: I0421 10:47:35.939460 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:35.939670 kubelet[2153]: E0421 10:47:35.939600 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 
10:47:35.941534 kubelet[2153]: E0421 10:47:35.941493 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:35.941665 kubelet[2153]: E0421 10:47:35.941595 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:35.942059 kubelet[2153]: I0421 10:47:35.942045 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:35.943167 kubelet[2153]: E0421 10:47:35.943134 2153 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:35.943269 kubelet[2153]: E0421 10:47:35.943240 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:36.943652 kubelet[2153]: I0421 10:47:36.943486 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:36.943997 kubelet[2153]: I0421 10:47:36.943730 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:36.945042 kubelet[2153]: I0421 10:47:36.944982 2153 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:36.947318 kubelet[2153]: E0421 10:47:36.947288 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:36.948907 kubelet[2153]: E0421 10:47:36.948884 2153 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:36.949504 kubelet[2153]: E0421 10:47:36.949459 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:37.724068 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Apr 21 10:47:37.724086 systemd[1]: Reloading... Apr 21 10:47:37.782147 zram_generator::config[2484]: No configuration found. Apr 21 10:47:37.864930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:47:37.918746 systemd[1]: Reloading finished in 194 ms. Apr 21 10:47:37.944973 kubelet[2153]: E0421 10:47:37.944919 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:37.945221 kubelet[2153]: E0421 10:47:37.945005 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:37.945221 kubelet[2153]: E0421 10:47:37.945206 2153 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:37.947432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:37.963318 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:47:37.963506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:47:37.973143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:47:38.067157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:47:38.070558 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:47:38.100178 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:47:38.100178 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:47:38.100178 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:47:38.100486 kubelet[2526]: I0421 10:47:38.100202 2526 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:47:38.105603 kubelet[2526]: I0421 10:47:38.105560 2526 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:47:38.105603 kubelet[2526]: I0421 10:47:38.105589 2526 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:47:38.105782 kubelet[2526]: I0421 10:47:38.105759 2526 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:47:38.106721 kubelet[2526]: I0421 10:47:38.106700 2526 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:47:38.108456 kubelet[2526]: I0421 10:47:38.108421 2526 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:47:38.114063 kubelet[2526]: E0421 10:47:38.114030 2526 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:47:38.114063 kubelet[2526]: I0421 10:47:38.114060 2526 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:47:38.119790 kubelet[2526]: I0421 10:47:38.119775 2526 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:47:38.119989 kubelet[2526]: I0421 10:47:38.119956 2526 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:47:38.120121 kubelet[2526]: I0421 10:47:38.119981 2526 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:47:38.120121 kubelet[2526]: I0421 10:47:38.120113 2526 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:47:38.120121 
kubelet[2526]: I0421 10:47:38.120120 2526 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:47:38.120233 kubelet[2526]: I0421 10:47:38.120152 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:47:38.120290 kubelet[2526]: I0421 10:47:38.120265 2526 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:47:38.120290 kubelet[2526]: I0421 10:47:38.120282 2526 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:47:38.120322 kubelet[2526]: I0421 10:47:38.120299 2526 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:47:38.120322 kubelet[2526]: I0421 10:47:38.120309 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:47:38.124967 kubelet[2526]: I0421 10:47:38.124909 2526 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:47:38.125306 kubelet[2526]: I0421 10:47:38.125275 2526 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:47:38.128929 kubelet[2526]: I0421 10:47:38.128907 2526 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:47:38.128929 kubelet[2526]: I0421 10:47:38.128943 2526 server.go:1289] "Started kubelet" Apr 21 10:47:38.129417 kubelet[2526]: I0421 10:47:38.129139 2526 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:47:38.130311 kubelet[2526]: I0421 10:47:38.130036 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:47:38.131386 kubelet[2526]: I0421 10:47:38.130692 2526 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:47:38.131386 kubelet[2526]: I0421 10:47:38.131271 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:47:38.131497 kubelet[2526]: I0421 10:47:38.131456 2526 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:47:38.132313 kubelet[2526]: I0421 10:47:38.132172 2526 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:47:38.132725 kubelet[2526]: I0421 10:47:38.132690 2526 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:47:38.132778 kubelet[2526]: I0421 10:47:38.132764 2526 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:47:38.132919 kubelet[2526]: I0421 10:47:38.132879 2526 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:47:38.135753 kubelet[2526]: E0421 10:47:38.135730 2526 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:47:38.136793 kubelet[2526]: I0421 10:47:38.136775 2526 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:47:38.136823 kubelet[2526]: I0421 10:47:38.136801 2526 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:47:38.136996 kubelet[2526]: I0421 10:47:38.136954 2526 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:47:38.144730 kubelet[2526]: I0421 10:47:38.144701 2526 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:47:38.146065 kubelet[2526]: I0421 10:47:38.145743 2526 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:47:38.146065 kubelet[2526]: I0421 10:47:38.145757 2526 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:47:38.146065 kubelet[2526]: I0421 10:47:38.145773 2526 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:47:38.146065 kubelet[2526]: I0421 10:47:38.145778 2526 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:47:38.146065 kubelet[2526]: E0421 10:47:38.145807 2526 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:47:38.167086 kubelet[2526]: I0421 10:47:38.167060 2526 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:47:38.167086 kubelet[2526]: I0421 10:47:38.167078 2526 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:47:38.167086 kubelet[2526]: I0421 10:47:38.167108 2526 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:47:38.167220 kubelet[2526]: I0421 10:47:38.167203 2526 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:47:38.167254 kubelet[2526]: I0421 10:47:38.167221 2526 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:47:38.167254 kubelet[2526]: I0421 10:47:38.167234 2526 policy_none.go:49] "None policy: Start" Apr 21 10:47:38.167254 kubelet[2526]: I0421 10:47:38.167242 2526 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:47:38.167254 kubelet[2526]: I0421 10:47:38.167249 2526 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:47:38.167330 kubelet[2526]: I0421 10:47:38.167316 2526 state_mem.go:75] "Updated machine memory state" Apr 21 10:47:38.170261 kubelet[2526]: E0421 10:47:38.170229 2526 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:47:38.170368 kubelet[2526]: I0421 
10:47:38.170353 2526 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:47:38.170399 kubelet[2526]: I0421 10:47:38.170374 2526 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:47:38.170514 kubelet[2526]: I0421 10:47:38.170495 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:47:38.172121 kubelet[2526]: E0421 10:47:38.172074 2526 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:47:38.247074 kubelet[2526]: I0421 10:47:38.246720 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.247074 kubelet[2526]: I0421 10:47:38.246885 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:38.247074 kubelet[2526]: I0421 10:47:38.246936 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:38.253657 kubelet[2526]: E0421 10:47:38.253522 2526 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.253796 kubelet[2526]: E0421 10:47:38.253698 2526 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:38.253796 kubelet[2526]: E0421 10:47:38.253726 2526 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:38.274891 kubelet[2526]: I0421 10:47:38.274819 2526 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 21 10:47:38.282680 kubelet[2526]: I0421 10:47:38.282658 2526 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 21 10:47:38.282769 kubelet[2526]: I0421 10:47:38.282719 2526 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 21 10:47:38.434791 kubelet[2526]: I0421 10:47:38.434744 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.434791 kubelet[2526]: I0421 10:47:38.434816 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:38.435057 kubelet[2526]: I0421 10:47:38.434833 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.435057 kubelet[2526]: I0421 10:47:38.434870 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.435057 kubelet[2526]: I0421 10:47:38.434884 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.435057 kubelet[2526]: I0421 10:47:38.434898 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 21 10:47:38.435057 kubelet[2526]: I0421 10:47:38.434910 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:38.435162 kubelet[2526]: I0421 10:47:38.434924 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c8204d431b9497d3072717126b4438a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c8204d431b9497d3072717126b4438a\") " pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:38.435162 kubelet[2526]: I0421 10:47:38.434938 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 10:47:38.554562 kubelet[2526]: E0421 10:47:38.554484 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:38.554724 kubelet[2526]: E0421 10:47:38.554489 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:38.554724 kubelet[2526]: E0421 10:47:38.554500 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:39.121288 kubelet[2526]: I0421 10:47:39.121240 2526 apiserver.go:52] "Watching apiserver" Apr 21 10:47:39.132914 kubelet[2526]: I0421 10:47:39.132822 2526 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:47:39.154526 kubelet[2526]: I0421 10:47:39.154455 2526 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:39.154906 kubelet[2526]: E0421 10:47:39.154838 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:39.155387 kubelet[2526]: E0421 10:47:39.155349 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:39.162204 kubelet[2526]: E0421 10:47:39.162156 2526 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 21 10:47:39.162294 kubelet[2526]: E0421 10:47:39.162264 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:39.169702 kubelet[2526]: I0421 10:47:39.169638 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.169627085 podStartE2EDuration="3.169627085s" podCreationTimestamp="2026-04-21 10:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:47:39.162256298 +0000 UTC m=+1.088554193" watchObservedRunningTime="2026-04-21 10:47:39.169627085 +0000 UTC m=+1.095924979" Apr 21 10:47:39.179471 kubelet[2526]: I0421 10:47:39.179409 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.179393174 podStartE2EDuration="3.179393174s" podCreationTimestamp="2026-04-21 10:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:47:39.170683883 +0000 UTC m=+1.096981776" watchObservedRunningTime="2026-04-21 10:47:39.179393174 +0000 UTC m=+1.105691058" Apr 21 10:47:39.186819 kubelet[2526]: I0421 10:47:39.186769 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.186761369 podStartE2EDuration="3.186761369s" podCreationTimestamp="2026-04-21 10:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:47:39.179536996 +0000 UTC m=+1.105834879" watchObservedRunningTime="2026-04-21 10:47:39.186761369 +0000 UTC m=+1.113059263" Apr 21 10:47:40.156372 kubelet[2526]: E0421 10:47:40.156337 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:40.156737 kubelet[2526]: E0421 10:47:40.156420 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:41.157653 kubelet[2526]: E0421 10:47:41.157621 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:42.387182 kubelet[2526]: I0421 10:47:42.387130 2526 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:47:42.387814 containerd[1480]: time="2026-04-21T10:47:42.387741521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:47:42.388035 kubelet[2526]: I0421 10:47:42.387985 2526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:47:43.588950 systemd[1]: Created slice kubepods-besteffort-pod060dc96a_0683_4ae8_bce6_44c27ae7b290.slice - libcontainer container kubepods-besteffort-pod060dc96a_0683_4ae8_bce6_44c27ae7b290.slice. Apr 21 10:47:43.672879 kubelet[2526]: I0421 10:47:43.672720 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060dc96a-0683-4ae8-bce6-44c27ae7b290-xtables-lock\") pod \"kube-proxy-dzmwb\" (UID: \"060dc96a-0683-4ae8-bce6-44c27ae7b290\") " pod="kube-system/kube-proxy-dzmwb" Apr 21 10:47:43.672879 kubelet[2526]: I0421 10:47:43.672980 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060dc96a-0683-4ae8-bce6-44c27ae7b290-lib-modules\") pod \"kube-proxy-dzmwb\" (UID: \"060dc96a-0683-4ae8-bce6-44c27ae7b290\") " pod="kube-system/kube-proxy-dzmwb" Apr 21 10:47:43.673708 kubelet[2526]: I0421 10:47:43.673200 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/060dc96a-0683-4ae8-bce6-44c27ae7b290-kube-proxy\") pod \"kube-proxy-dzmwb\" (UID: \"060dc96a-0683-4ae8-bce6-44c27ae7b290\") " pod="kube-system/kube-proxy-dzmwb" Apr 21 10:47:43.673708 kubelet[2526]: I0421 10:47:43.673273 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknfp\" (UniqueName: \"kubernetes.io/projected/060dc96a-0683-4ae8-bce6-44c27ae7b290-kube-api-access-kknfp\") pod \"kube-proxy-dzmwb\" (UID: \"060dc96a-0683-4ae8-bce6-44c27ae7b290\") " pod="kube-system/kube-proxy-dzmwb" Apr 21 10:47:43.693605 systemd[1]: Created slice kubepods-besteffort-poda5f65e89_5487_486e_a278_cb24d2a42213.slice - libcontainer container kubepods-besteffort-poda5f65e89_5487_486e_a278_cb24d2a42213.slice. Apr 21 10:47:43.774410 kubelet[2526]: I0421 10:47:43.774363 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5f65e89-5487-486e-a278-cb24d2a42213-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-jk8x9\" (UID: \"a5f65e89-5487-486e-a278-cb24d2a42213\") " pod="tigera-operator/tigera-operator-6bf85f8dd-jk8x9" Apr 21 10:47:43.774410 kubelet[2526]: I0421 10:47:43.774403 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qd2n\" (UniqueName: \"kubernetes.io/projected/a5f65e89-5487-486e-a278-cb24d2a42213-kube-api-access-6qd2n\") pod \"tigera-operator-6bf85f8dd-jk8x9\" (UID: \"a5f65e89-5487-486e-a278-cb24d2a42213\") " pod="tigera-operator/tigera-operator-6bf85f8dd-jk8x9" Apr 21 10:47:43.897085 kubelet[2526]: E0421 10:47:43.896971 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:43.897632 containerd[1480]: time="2026-04-21T10:47:43.897545667Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-dzmwb,Uid:060dc96a-0683-4ae8-bce6-44c27ae7b290,Namespace:kube-system,Attempt:0,}" Apr 21 10:47:43.918730 containerd[1480]: time="2026-04-21T10:47:43.918632907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:43.918730 containerd[1480]: time="2026-04-21T10:47:43.918677976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:43.918730 containerd[1480]: time="2026-04-21T10:47:43.918685911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:43.918962 containerd[1480]: time="2026-04-21T10:47:43.918757592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:43.935050 systemd[1]: Started cri-containerd-b270b08169e0eeccaaecf429a04d68c760d147baa301ad4acffca427d9fd55c5.scope - libcontainer container b270b08169e0eeccaaecf429a04d68c760d147baa301ad4acffca427d9fd55c5. 
Apr 21 10:47:43.949252 containerd[1480]: time="2026-04-21T10:47:43.949203586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzmwb,Uid:060dc96a-0683-4ae8-bce6-44c27ae7b290,Namespace:kube-system,Attempt:0,} returns sandbox id \"b270b08169e0eeccaaecf429a04d68c760d147baa301ad4acffca427d9fd55c5\"" Apr 21 10:47:43.949870 kubelet[2526]: E0421 10:47:43.949822 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:43.954242 containerd[1480]: time="2026-04-21T10:47:43.954205694Z" level=info msg="CreateContainer within sandbox \"b270b08169e0eeccaaecf429a04d68c760d147baa301ad4acffca427d9fd55c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:47:43.965236 containerd[1480]: time="2026-04-21T10:47:43.965206087Z" level=info msg="CreateContainer within sandbox \"b270b08169e0eeccaaecf429a04d68c760d147baa301ad4acffca427d9fd55c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c8a085c622a49f157b7698e4bad21b0077ba66c49b10fc2fdb2a3cc4bb5c436\"" Apr 21 10:47:43.965657 containerd[1480]: time="2026-04-21T10:47:43.965616606Z" level=info msg="StartContainer for \"2c8a085c622a49f157b7698e4bad21b0077ba66c49b10fc2fdb2a3cc4bb5c436\"" Apr 21 10:47:43.997147 containerd[1480]: time="2026-04-21T10:47:43.997065249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-jk8x9,Uid:a5f65e89-5487-486e-a278-cb24d2a42213,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:47:44.003088 systemd[1]: Started cri-containerd-2c8a085c622a49f157b7698e4bad21b0077ba66c49b10fc2fdb2a3cc4bb5c436.scope - libcontainer container 2c8a085c622a49f157b7698e4bad21b0077ba66c49b10fc2fdb2a3cc4bb5c436. Apr 21 10:47:44.017944 containerd[1480]: time="2026-04-21T10:47:44.017661168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:44.017944 containerd[1480]: time="2026-04-21T10:47:44.017703920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:44.017944 containerd[1480]: time="2026-04-21T10:47:44.017712776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:44.018064 containerd[1480]: time="2026-04-21T10:47:44.017926219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:44.027865 containerd[1480]: time="2026-04-21T10:47:44.027767610Z" level=info msg="StartContainer for \"2c8a085c622a49f157b7698e4bad21b0077ba66c49b10fc2fdb2a3cc4bb5c436\" returns successfully" Apr 21 10:47:44.036009 systemd[1]: Started cri-containerd-a41f1c837ecf2a22b2e1dbbeb065da4b0a4b112ae23f1252145eec8c24449a4d.scope - libcontainer container a41f1c837ecf2a22b2e1dbbeb065da4b0a4b112ae23f1252145eec8c24449a4d. 
Apr 21 10:47:44.073474 containerd[1480]: time="2026-04-21T10:47:44.073390334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-jk8x9,Uid:a5f65e89-5487-486e-a278-cb24d2a42213,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a41f1c837ecf2a22b2e1dbbeb065da4b0a4b112ae23f1252145eec8c24449a4d\"" Apr 21 10:47:44.074920 containerd[1480]: time="2026-04-21T10:47:44.074896263Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:47:44.164545 kubelet[2526]: E0421 10:47:44.164313 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:44.172541 kubelet[2526]: I0421 10:47:44.172471 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dzmwb" podStartSLOduration=1.172459226 podStartE2EDuration="1.172459226s" podCreationTimestamp="2026-04-21 10:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:47:44.172270535 +0000 UTC m=+6.098568420" watchObservedRunningTime="2026-04-21 10:47:44.172459226 +0000 UTC m=+6.098757121" Apr 21 10:47:44.846350 kubelet[2526]: E0421 10:47:44.846227 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:45.168301 kubelet[2526]: E0421 10:47:45.167656 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:45.751440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294730890.mount: Deactivated successfully. 
Apr 21 10:47:46.170803 kubelet[2526]: E0421 10:47:46.170099 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:47.673356 containerd[1480]: time="2026-04-21T10:47:47.673272500Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:47.673893 containerd[1480]: time="2026-04-21T10:47:47.673854548Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:47:47.674742 containerd[1480]: time="2026-04-21T10:47:47.674640011Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:47.676628 containerd[1480]: time="2026-04-21T10:47:47.676550657Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:47:47.677469 containerd[1480]: time="2026-04-21T10:47:47.677447499Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.602520406s" Apr 21 10:47:47.677517 containerd[1480]: time="2026-04-21T10:47:47.677476369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:47:47.681435 containerd[1480]: time="2026-04-21T10:47:47.681290770Z" level=info msg="CreateContainer within sandbox 
\"a41f1c837ecf2a22b2e1dbbeb065da4b0a4b112ae23f1252145eec8c24449a4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:47:47.690838 containerd[1480]: time="2026-04-21T10:47:47.690784971Z" level=info msg="CreateContainer within sandbox \"a41f1c837ecf2a22b2e1dbbeb065da4b0a4b112ae23f1252145eec8c24449a4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a\"" Apr 21 10:47:47.691343 containerd[1480]: time="2026-04-21T10:47:47.691299431Z" level=info msg="StartContainer for \"47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a\"" Apr 21 10:47:47.710053 systemd[1]: run-containerd-runc-k8s.io-47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a-runc.yMZ6l2.mount: Deactivated successfully. Apr 21 10:47:47.722012 systemd[1]: Started cri-containerd-47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a.scope - libcontainer container 47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a. 
Apr 21 10:47:47.741316 containerd[1480]: time="2026-04-21T10:47:47.741276613Z" level=info msg="StartContainer for \"47303cae798afe2e60a8e56edee2da4143a2633856bdfd5161e111711b7cee6a\" returns successfully" Apr 21 10:47:48.181936 kubelet[2526]: I0421 10:47:48.181813 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-jk8x9" podStartSLOduration=1.5782491090000001 podStartE2EDuration="5.181798245s" podCreationTimestamp="2026-04-21 10:47:43 +0000 UTC" firstStartedPulling="2026-04-21 10:47:44.074628363 +0000 UTC m=+6.000926246" lastFinishedPulling="2026-04-21 10:47:47.678177497 +0000 UTC m=+9.604475382" observedRunningTime="2026-04-21 10:47:48.181630814 +0000 UTC m=+10.107928698" watchObservedRunningTime="2026-04-21 10:47:48.181798245 +0000 UTC m=+10.108096140" Apr 21 10:47:48.543736 kubelet[2526]: E0421 10:47:48.543583 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:48.843353 kubelet[2526]: E0421 10:47:48.842724 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:49.176562 kubelet[2526]: E0421 10:47:49.176311 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:49.176891 kubelet[2526]: E0421 10:47:49.176880 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:52.827623 sudo[1656]: pam_unix(sudo:session): session closed for user root Apr 21 10:47:52.832255 sshd[1653]: pam_unix(sshd:session): session closed for user core Apr 21 10:47:52.839842 systemd[1]: 
sshd@6-10.0.0.157:22-10.0.0.1:33390.service: Deactivated successfully. Apr 21 10:47:52.842149 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:47:52.842260 systemd[1]: session-7.scope: Consumed 3.995s CPU time, 162.6M memory peak, 0B memory swap peak. Apr 21 10:47:52.846233 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:47:52.849102 systemd-logind[1466]: Removed session 7. Apr 21 10:47:54.436391 systemd[1]: Created slice kubepods-besteffort-pod8a755830_6b89_46c9_8825_5cc10f0f3250.slice - libcontainer container kubepods-besteffort-pod8a755830_6b89_46c9_8825_5cc10f0f3250.slice. Apr 21 10:47:54.473342 systemd[1]: Created slice kubepods-besteffort-pod51445b29_67d6_48ff_92d7_0bd273b09e1d.slice - libcontainer container kubepods-besteffort-pod51445b29_67d6_48ff_92d7_0bd273b09e1d.slice. Apr 21 10:47:54.479214 kubelet[2526]: I0421 10:47:54.479183 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzqh\" (UniqueName: \"kubernetes.io/projected/8a755830-6b89-46c9-8825-5cc10f0f3250-kube-api-access-5rzqh\") pod \"calico-typha-64c7d7d5f5-xdgdx\" (UID: \"8a755830-6b89-46c9-8825-5cc10f0f3250\") " pod="calico-system/calico-typha-64c7d7d5f5-xdgdx" Apr 21 10:47:54.479469 kubelet[2526]: I0421 10:47:54.479225 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a755830-6b89-46c9-8825-5cc10f0f3250-tigera-ca-bundle\") pod \"calico-typha-64c7d7d5f5-xdgdx\" (UID: \"8a755830-6b89-46c9-8825-5cc10f0f3250\") " pod="calico-system/calico-typha-64c7d7d5f5-xdgdx" Apr 21 10:47:54.479469 kubelet[2526]: I0421 10:47:54.479243 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-flexvol-driver-host\") pod \"calico-node-xwrsn\" (UID: 
\"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.479469 kubelet[2526]: I0421 10:47:54.479255 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/51445b29-67d6-48ff-92d7-0bd273b09e1d-node-certs\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.479469 kubelet[2526]: I0421 10:47:54.479270 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-bpffs\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483005 kubelet[2526]: I0421 10:47:54.482910 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-policysync\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483005 kubelet[2526]: I0421 10:47:54.482946 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51445b29-67d6-48ff-92d7-0bd273b09e1d-tigera-ca-bundle\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483005 kubelet[2526]: I0421 10:47:54.482964 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-xtables-lock\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 
10:47:54.483005 kubelet[2526]: I0421 10:47:54.482976 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-sys-fs\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483005 kubelet[2526]: I0421 10:47:54.482990 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-var-run-calico\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483189 kubelet[2526]: I0421 10:47:54.483002 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-cni-bin-dir\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483189 kubelet[2526]: I0421 10:47:54.483014 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-nodeproc\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483189 kubelet[2526]: I0421 10:47:54.483024 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-var-lib-calico\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483189 kubelet[2526]: I0421 10:47:54.483037 2526 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-lib-modules\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483189 kubelet[2526]: I0421 10:47:54.483049 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a755830-6b89-46c9-8825-5cc10f0f3250-typha-certs\") pod \"calico-typha-64c7d7d5f5-xdgdx\" (UID: \"8a755830-6b89-46c9-8825-5cc10f0f3250\") " pod="calico-system/calico-typha-64c7d7d5f5-xdgdx" Apr 21 10:47:54.483270 kubelet[2526]: I0421 10:47:54.483060 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-cni-log-dir\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483270 kubelet[2526]: I0421 10:47:54.483070 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/51445b29-67d6-48ff-92d7-0bd273b09e1d-cni-net-dir\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.483270 kubelet[2526]: I0421 10:47:54.483082 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgk69\" (UniqueName: \"kubernetes.io/projected/51445b29-67d6-48ff-92d7-0bd273b09e1d-kube-api-access-kgk69\") pod \"calico-node-xwrsn\" (UID: \"51445b29-67d6-48ff-92d7-0bd273b09e1d\") " pod="calico-system/calico-node-xwrsn" Apr 21 10:47:54.555036 kubelet[2526]: E0421 10:47:54.554992 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:47:54.584118 kubelet[2526]: I0421 10:47:54.583873 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cde7865-530b-49b0-8623-5246c29d042b-kubelet-dir\") pod \"csi-node-driver-mzjww\" (UID: \"4cde7865-530b-49b0-8623-5246c29d042b\") " pod="calico-system/csi-node-driver-mzjww" Apr 21 10:47:54.584118 kubelet[2526]: I0421 10:47:54.583940 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cde7865-530b-49b0-8623-5246c29d042b-varrun\") pod \"csi-node-driver-mzjww\" (UID: \"4cde7865-530b-49b0-8623-5246c29d042b\") " pod="calico-system/csi-node-driver-mzjww" Apr 21 10:47:54.584118 kubelet[2526]: I0421 10:47:54.584006 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cde7865-530b-49b0-8623-5246c29d042b-socket-dir\") pod \"csi-node-driver-mzjww\" (UID: \"4cde7865-530b-49b0-8623-5246c29d042b\") " pod="calico-system/csi-node-driver-mzjww" Apr 21 10:47:54.584286 kubelet[2526]: I0421 10:47:54.584120 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pw4\" (UniqueName: \"kubernetes.io/projected/4cde7865-530b-49b0-8623-5246c29d042b-kube-api-access-d4pw4\") pod \"csi-node-driver-mzjww\" (UID: \"4cde7865-530b-49b0-8623-5246c29d042b\") " pod="calico-system/csi-node-driver-mzjww" Apr 21 10:47:54.584286 kubelet[2526]: I0421 10:47:54.584240 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cde7865-530b-49b0-8623-5246c29d042b-registration-dir\") pod \"csi-node-driver-mzjww\" (UID: \"4cde7865-530b-49b0-8623-5246c29d042b\") " pod="calico-system/csi-node-driver-mzjww" Apr 21 10:47:54.592307 kubelet[2526]: E0421 10:47:54.592283 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.592307 kubelet[2526]: W0421 10:47:54.592301 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.592403 kubelet[2526]: E0421 10:47:54.592316 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.592577 kubelet[2526]: E0421 10:47:54.592565 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.592577 kubelet[2526]: W0421 10:47:54.592577 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.592611 kubelet[2526]: E0421 10:47:54.592584 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.597444 kubelet[2526]: E0421 10:47:54.597430 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.597614 kubelet[2526]: W0421 10:47:54.597513 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.597614 kubelet[2526]: E0421 10:47:54.597528 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.597775 kubelet[2526]: E0421 10:47:54.597767 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.597831 kubelet[2526]: W0421 10:47:54.597806 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.597831 kubelet[2526]: E0421 10:47:54.597817 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.684687 kubelet[2526]: E0421 10:47:54.684647 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.684794 kubelet[2526]: W0421 10:47:54.684702 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.684794 kubelet[2526]: E0421 10:47:54.684719 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.685018 kubelet[2526]: E0421 10:47:54.685004 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.685045 kubelet[2526]: W0421 10:47:54.685019 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.685045 kubelet[2526]: E0421 10:47:54.685028 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.685368 kubelet[2526]: E0421 10:47:54.685350 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.685425 kubelet[2526]: W0421 10:47:54.685370 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.685425 kubelet[2526]: E0421 10:47:54.685381 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.685633 kubelet[2526]: E0421 10:47:54.685620 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.685650 kubelet[2526]: W0421 10:47:54.685634 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.685650 kubelet[2526]: E0421 10:47:54.685641 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.685901 kubelet[2526]: E0421 10:47:54.685842 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.685901 kubelet[2526]: W0421 10:47:54.685889 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.685901 kubelet[2526]: E0421 10:47:54.685895 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.686153 kubelet[2526]: E0421 10:47:54.686129 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.686153 kubelet[2526]: W0421 10:47:54.686147 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.686153 kubelet[2526]: E0421 10:47:54.686157 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.686323 kubelet[2526]: E0421 10:47:54.686310 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.686323 kubelet[2526]: W0421 10:47:54.686321 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.686363 kubelet[2526]: E0421 10:47:54.686328 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.686529 kubelet[2526]: E0421 10:47:54.686476 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.686529 kubelet[2526]: W0421 10:47:54.686487 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.686529 kubelet[2526]: E0421 10:47:54.686493 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.686792 kubelet[2526]: E0421 10:47:54.686779 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.686792 kubelet[2526]: W0421 10:47:54.686791 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.686873 kubelet[2526]: E0421 10:47:54.686798 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.687163 kubelet[2526]: E0421 10:47:54.687088 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.687163 kubelet[2526]: W0421 10:47:54.687115 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.687163 kubelet[2526]: E0421 10:47:54.687124 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.687438 kubelet[2526]: E0421 10:47:54.687341 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.687438 kubelet[2526]: W0421 10:47:54.687350 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.687438 kubelet[2526]: E0421 10:47:54.687359 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.687638 kubelet[2526]: E0421 10:47:54.687586 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.687638 kubelet[2526]: W0421 10:47:54.687598 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.687638 kubelet[2526]: E0421 10:47:54.687605 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.687797 kubelet[2526]: E0421 10:47:54.687781 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.687797 kubelet[2526]: W0421 10:47:54.687794 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.687837 kubelet[2526]: E0421 10:47:54.687801 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.688058 kubelet[2526]: E0421 10:47:54.688019 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.688058 kubelet[2526]: W0421 10:47:54.688031 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.688058 kubelet[2526]: E0421 10:47:54.688038 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.688233 kubelet[2526]: E0421 10:47:54.688218 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.688233 kubelet[2526]: W0421 10:47:54.688230 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.688233 kubelet[2526]: E0421 10:47:54.688236 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.688429 kubelet[2526]: E0421 10:47:54.688416 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.688429 kubelet[2526]: W0421 10:47:54.688427 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.688464 kubelet[2526]: E0421 10:47:54.688433 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.688691 kubelet[2526]: E0421 10:47:54.688649 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.688691 kubelet[2526]: W0421 10:47:54.688662 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.688691 kubelet[2526]: E0421 10:47:54.688667 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.688823 kubelet[2526]: E0421 10:47:54.688807 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.688823 kubelet[2526]: W0421 10:47:54.688821 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.688898 kubelet[2526]: E0421 10:47:54.688829 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.689062 kubelet[2526]: E0421 10:47:54.689031 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.689062 kubelet[2526]: W0421 10:47:54.689047 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.689062 kubelet[2526]: E0421 10:47:54.689053 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.689230 kubelet[2526]: E0421 10:47:54.689217 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.689230 kubelet[2526]: W0421 10:47:54.689228 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.689265 kubelet[2526]: E0421 10:47:54.689234 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.689413 kubelet[2526]: E0421 10:47:54.689401 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.689413 kubelet[2526]: W0421 10:47:54.689411 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.689447 kubelet[2526]: E0421 10:47:54.689417 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.689659 kubelet[2526]: E0421 10:47:54.689640 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.689659 kubelet[2526]: W0421 10:47:54.689657 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.689717 kubelet[2526]: E0421 10:47:54.689690 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.689994 kubelet[2526]: E0421 10:47:54.689948 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.689994 kubelet[2526]: W0421 10:47:54.689961 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.689994 kubelet[2526]: E0421 10:47:54.689968 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.690128 kubelet[2526]: E0421 10:47:54.690117 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.690128 kubelet[2526]: W0421 10:47:54.690125 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.690167 kubelet[2526]: E0421 10:47:54.690131 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.690359 kubelet[2526]: E0421 10:47:54.690340 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.690359 kubelet[2526]: W0421 10:47:54.690354 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.690416 kubelet[2526]: E0421 10:47:54.690363 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:47:54.697307 kubelet[2526]: E0421 10:47:54.697291 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:47:54.697307 kubelet[2526]: W0421 10:47:54.697306 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:47:54.697373 kubelet[2526]: E0421 10:47:54.697315 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:47:54.742424 kubelet[2526]: E0421 10:47:54.742387 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:54.743052 containerd[1480]: time="2026-04-21T10:47:54.743017707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c7d7d5f5-xdgdx,Uid:8a755830-6b89-46c9-8825-5cc10f0f3250,Namespace:calico-system,Attempt:0,}" Apr 21 10:47:54.763501 containerd[1480]: time="2026-04-21T10:47:54.763442193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:54.763567 containerd[1480]: time="2026-04-21T10:47:54.763502044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:54.763567 containerd[1480]: time="2026-04-21T10:47:54.763517685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:54.763667 containerd[1480]: time="2026-04-21T10:47:54.763576486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:54.776497 containerd[1480]: time="2026-04-21T10:47:54.776467494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xwrsn,Uid:51445b29-67d6-48ff-92d7-0bd273b09e1d,Namespace:calico-system,Attempt:0,}" Apr 21 10:47:54.778015 systemd[1]: Started cri-containerd-77d1728ca481287a703ba3e183522c61e8e7452a7bb7f66b09a84f7ad4e1f214.scope - libcontainer container 77d1728ca481287a703ba3e183522c61e8e7452a7bb7f66b09a84f7ad4e1f214. Apr 21 10:47:54.798545 containerd[1480]: time="2026-04-21T10:47:54.798363843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:47:54.798545 containerd[1480]: time="2026-04-21T10:47:54.798397538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:47:54.798545 containerd[1480]: time="2026-04-21T10:47:54.798415052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:54.798545 containerd[1480]: time="2026-04-21T10:47:54.798492368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:47:54.824079 systemd[1]: Started cri-containerd-1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903.scope - libcontainer container 1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903. Apr 21 10:47:54.824893 containerd[1480]: time="2026-04-21T10:47:54.824819758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c7d7d5f5-xdgdx,Uid:8a755830-6b89-46c9-8825-5cc10f0f3250,Namespace:calico-system,Attempt:0,} returns sandbox id \"77d1728ca481287a703ba3e183522c61e8e7452a7bb7f66b09a84f7ad4e1f214\"" Apr 21 10:47:54.831627 kubelet[2526]: E0421 10:47:54.830710 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:47:54.837258 containerd[1480]: time="2026-04-21T10:47:54.837234289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:47:54.843219 containerd[1480]: time="2026-04-21T10:47:54.843155127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xwrsn,Uid:51445b29-67d6-48ff-92d7-0bd273b09e1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\"" Apr 21 10:47:56.148583 kubelet[2526]: E0421 
10:47:56.148531 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:47:56.419239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677020242.mount: Deactivated successfully. Apr 21 10:47:58.146177 kubelet[2526]: E0421 10:47:58.146138 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:00.148373 kubelet[2526]: E0421 10:48:00.148304 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:02.146398 kubelet[2526]: E0421 10:48:02.146157 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:02.159681 update_engine[1469]: I20260421 10:48:02.159576 1469 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:48:02.178936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3066) Apr 21 10:48:02.204948 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3066) Apr 21 10:48:04.146431 kubelet[2526]: E0421 10:48:04.146329 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:06.146139 kubelet[2526]: E0421 10:48:06.146100 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:08.146601 kubelet[2526]: E0421 10:48:08.146485 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:10.146443 kubelet[2526]: E0421 10:48:10.146372 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:12.146677 kubelet[2526]: E0421 10:48:12.146596 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:12.659415 containerd[1480]: time="2026-04-21T10:48:12.659333760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:12.660283 containerd[1480]: time="2026-04-21T10:48:12.660208332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:48:12.661488 containerd[1480]: time="2026-04-21T10:48:12.661436405Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:12.663834 containerd[1480]: time="2026-04-21T10:48:12.663809600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:12.664391 containerd[1480]: time="2026-04-21T10:48:12.664361294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 17.827097167s" Apr 21 10:48:12.664391 containerd[1480]: time="2026-04-21T10:48:12.664383337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:48:12.667473 containerd[1480]: time="2026-04-21T10:48:12.667437454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 
10:48:12.685246 containerd[1480]: time="2026-04-21T10:48:12.685198981Z" level=info msg="CreateContainer within sandbox \"77d1728ca481287a703ba3e183522c61e8e7452a7bb7f66b09a84f7ad4e1f214\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:48:12.697806 containerd[1480]: time="2026-04-21T10:48:12.697739879Z" level=info msg="CreateContainer within sandbox \"77d1728ca481287a703ba3e183522c61e8e7452a7bb7f66b09a84f7ad4e1f214\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f3c853ff474102cc8b3972bf3f70e05a86aa5e21b8d60053db3294dfb7a7a33f\"" Apr 21 10:48:12.698790 containerd[1480]: time="2026-04-21T10:48:12.698742024Z" level=info msg="StartContainer for \"f3c853ff474102cc8b3972bf3f70e05a86aa5e21b8d60053db3294dfb7a7a33f\"" Apr 21 10:48:12.729125 systemd[1]: Started cri-containerd-f3c853ff474102cc8b3972bf3f70e05a86aa5e21b8d60053db3294dfb7a7a33f.scope - libcontainer container f3c853ff474102cc8b3972bf3f70e05a86aa5e21b8d60053db3294dfb7a7a33f. Apr 21 10:48:12.772708 containerd[1480]: time="2026-04-21T10:48:12.772613222Z" level=info msg="StartContainer for \"f3c853ff474102cc8b3972bf3f70e05a86aa5e21b8d60053db3294dfb7a7a33f\" returns successfully" Apr 21 10:48:13.232275 kubelet[2526]: E0421 10:48:13.232211 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:13.252593 kubelet[2526]: I0421 10:48:13.252515 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64c7d7d5f5-xdgdx" podStartSLOduration=1.416669317 podStartE2EDuration="19.252504149s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:47:54.831418331 +0000 UTC m=+16.757716215" lastFinishedPulling="2026-04-21 10:48:12.667253161 +0000 UTC m=+34.593551047" observedRunningTime="2026-04-21 10:48:13.252437103 +0000 UTC m=+35.178735001" 
watchObservedRunningTime="2026-04-21 10:48:13.252504149 +0000 UTC m=+35.178802044" Apr 21 10:48:13.298758 kubelet[2526]: E0421 10:48:13.298695 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.298758 kubelet[2526]: W0421 10:48:13.298724 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.298758 kubelet[2526]: E0421 10:48:13.298752 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.299032 kubelet[2526]: E0421 10:48:13.298973 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299032 kubelet[2526]: W0421 10:48:13.298979 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299032 kubelet[2526]: E0421 10:48:13.298985 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.299174 kubelet[2526]: E0421 10:48:13.299152 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299174 kubelet[2526]: W0421 10:48:13.299165 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299174 kubelet[2526]: E0421 10:48:13.299170 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.299378 kubelet[2526]: E0421 10:48:13.299345 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299378 kubelet[2526]: W0421 10:48:13.299359 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299378 kubelet[2526]: E0421 10:48:13.299365 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.299537 kubelet[2526]: E0421 10:48:13.299509 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299537 kubelet[2526]: W0421 10:48:13.299522 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299537 kubelet[2526]: E0421 10:48:13.299528 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.299673 kubelet[2526]: E0421 10:48:13.299654 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299673 kubelet[2526]: W0421 10:48:13.299664 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299673 kubelet[2526]: E0421 10:48:13.299669 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.299806 kubelet[2526]: E0421 10:48:13.299788 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299806 kubelet[2526]: W0421 10:48:13.299799 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299806 kubelet[2526]: E0421 10:48:13.299803 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.299971 kubelet[2526]: E0421 10:48:13.299954 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.299971 kubelet[2526]: W0421 10:48:13.299965 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.299971 kubelet[2526]: E0421 10:48:13.299971 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.300152 kubelet[2526]: E0421 10:48:13.300119 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.300152 kubelet[2526]: W0421 10:48:13.300134 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.300152 kubelet[2526]: E0421 10:48:13.300140 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.300333 kubelet[2526]: E0421 10:48:13.300315 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.300333 kubelet[2526]: W0421 10:48:13.300326 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.300392 kubelet[2526]: E0421 10:48:13.300331 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.300482 kubelet[2526]: E0421 10:48:13.300465 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.300482 kubelet[2526]: W0421 10:48:13.300476 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.300482 kubelet[2526]: E0421 10:48:13.300481 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.300630 kubelet[2526]: E0421 10:48:13.300614 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.300630 kubelet[2526]: W0421 10:48:13.300624 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.300630 kubelet[2526]: E0421 10:48:13.300629 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.300777 kubelet[2526]: E0421 10:48:13.300760 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.300777 kubelet[2526]: W0421 10:48:13.300771 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.300777 kubelet[2526]: E0421 10:48:13.300777 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:13.301298 kubelet[2526]: E0421 10:48:13.301269 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.301298 kubelet[2526]: W0421 10:48:13.301295 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.301372 kubelet[2526]: E0421 10:48:13.301325 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:13.325108 kubelet[2526]: E0421 10:48:13.325095 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:13.325108 kubelet[2526]: W0421 10:48:13.325107 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:13.325141 kubelet[2526]: E0421 10:48:13.325112 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:48:14.146731 kubelet[2526]: E0421 10:48:14.146640 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:14.234038 kubelet[2526]: E0421 10:48:14.233522 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:14.309458 kubelet[2526]: E0421 10:48:14.309424 2526 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:48:14.309458 kubelet[2526]: W0421 10:48:14.309446 2526 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:48:14.309458 kubelet[2526]: E0421 10:48:14.309467 2526 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:48:14.378439 containerd[1480]: time="2026-04-21T10:48:14.378369727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:14.379064 containerd[1480]: time="2026-04-21T10:48:14.378973835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:48:14.380288 containerd[1480]: time="2026-04-21T10:48:14.380218170Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:14.382098 containerd[1480]: time="2026-04-21T10:48:14.382037639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:14.382653 containerd[1480]: time="2026-04-21T10:48:14.382613532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.715152092s" Apr 21 10:48:14.382653 containerd[1480]: time="2026-04-21T10:48:14.382642873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:48:14.386461 containerd[1480]: time="2026-04-21T10:48:14.386392743Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:48:14.399335 containerd[1480]: time="2026-04-21T10:48:14.399241273Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383\"" Apr 21 10:48:14.400166 containerd[1480]: time="2026-04-21T10:48:14.400135195Z" level=info msg="StartContainer for \"846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383\"" Apr 21 10:48:14.441093 systemd[1]: Started cri-containerd-846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383.scope - libcontainer container 846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383. Apr 21 10:48:14.467982 systemd[1]: cri-containerd-846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383.scope: Deactivated successfully. Apr 21 10:48:14.472486 containerd[1480]: time="2026-04-21T10:48:14.472440341Z" level=info msg="StartContainer for \"846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383\" returns successfully" Apr 21 10:48:14.487701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383-rootfs.mount: Deactivated successfully. 
Apr 21 10:48:14.528397 containerd[1480]: time="2026-04-21T10:48:14.526425783Z" level=info msg="shim disconnected" id=846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383 namespace=k8s.io Apr 21 10:48:14.528397 containerd[1480]: time="2026-04-21T10:48:14.528389793Z" level=warning msg="cleaning up after shim disconnected" id=846d22fb93d1cfc9db00ae879a84a643135db34d0fca0247906b20a6c7e50383 namespace=k8s.io Apr 21 10:48:14.528397 containerd[1480]: time="2026-04-21T10:48:14.528406434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:15.238531 kubelet[2526]: E0421 10:48:15.238494 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:15.239171 containerd[1480]: time="2026-04-21T10:48:15.239138516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:48:16.146292 kubelet[2526]: E0421 10:48:16.146204 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:16.849098 systemd[1]: Started sshd@7-10.0.0.157:22-10.0.0.1:38418.service - OpenSSH per-connection server daemon (10.0.0.1:38418). Apr 21 10:48:16.882353 sshd[3270]: Accepted publickey for core from 10.0.0.1 port 38418 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:16.883763 sshd[3270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:16.887346 systemd-logind[1466]: New session 8 of user core. Apr 21 10:48:16.894023 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 21 10:48:16.997591 sshd[3270]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:17.000570 systemd[1]: sshd@7-10.0.0.157:22-10.0.0.1:38418.service: Deactivated successfully. Apr 21 10:48:17.002123 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:48:17.002576 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:48:17.003422 systemd-logind[1466]: Removed session 8. Apr 21 10:48:18.147459 kubelet[2526]: E0421 10:48:18.147363 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:20.147438 kubelet[2526]: E0421 10:48:20.147124 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:22.009118 systemd[1]: Started sshd@8-10.0.0.157:22-10.0.0.1:38420.service - OpenSSH per-connection server daemon (10.0.0.1:38420). Apr 21 10:48:22.050877 sshd[3290]: Accepted publickey for core from 10.0.0.1 port 38420 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:22.052244 sshd[3290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:22.055968 systemd-logind[1466]: New session 9 of user core. Apr 21 10:48:22.063203 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:48:22.147008 kubelet[2526]: E0421 10:48:22.146957 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:22.203072 sshd[3290]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:22.206708 systemd[1]: sshd@8-10.0.0.157:22-10.0.0.1:38420.service: Deactivated successfully. Apr 21 10:48:22.208062 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:48:22.211047 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:48:22.212048 systemd-logind[1466]: Removed session 9. Apr 21 10:48:22.469572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241683123.mount: Deactivated successfully. Apr 21 10:48:22.691687 containerd[1480]: time="2026-04-21T10:48:22.691556647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:22.693281 containerd[1480]: time="2026-04-21T10:48:22.693192057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:48:22.695002 containerd[1480]: time="2026-04-21T10:48:22.694924691Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:22.701469 containerd[1480]: time="2026-04-21T10:48:22.701378381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:22.702286 containerd[1480]: time="2026-04-21T10:48:22.702229163Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.463048058s" Apr 21 10:48:22.702286 containerd[1480]: time="2026-04-21T10:48:22.702274302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:48:22.718896 containerd[1480]: time="2026-04-21T10:48:22.718662893Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:48:22.802633 containerd[1480]: time="2026-04-21T10:48:22.802420540Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527\"" Apr 21 10:48:22.805841 containerd[1480]: time="2026-04-21T10:48:22.805778486Z" level=info msg="StartContainer for \"5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527\"" Apr 21 10:48:23.002347 systemd[1]: Started cri-containerd-5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527.scope - libcontainer container 5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527. Apr 21 10:48:23.111169 containerd[1480]: time="2026-04-21T10:48:23.110230868Z" level=info msg="StartContainer for \"5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527\" returns successfully" Apr 21 10:48:23.207465 systemd[1]: cri-containerd-5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527.scope: Deactivated successfully. 
Apr 21 10:48:23.315346 containerd[1480]: time="2026-04-21T10:48:23.314403245Z" level=info msg="shim disconnected" id=5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527 namespace=k8s.io Apr 21 10:48:23.318652 containerd[1480]: time="2026-04-21T10:48:23.315830212Z" level=warning msg="cleaning up after shim disconnected" id=5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527 namespace=k8s.io Apr 21 10:48:23.319353 containerd[1480]: time="2026-04-21T10:48:23.318996637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:23.352964 containerd[1480]: time="2026-04-21T10:48:23.352247555Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:48:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:48:23.472841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f55d9cda76411580bfd743f4d2e3802b4f78380f9afff2ca6111caddd248527-rootfs.mount: Deactivated successfully. 
Apr 21 10:48:24.148265 kubelet[2526]: E0421 10:48:24.146997 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:24.277518 containerd[1480]: time="2026-04-21T10:48:24.277347355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:48:26.147435 kubelet[2526]: E0421 10:48:26.146802 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:27.225725 systemd[1]: Started sshd@9-10.0.0.157:22-10.0.0.1:55528.service - OpenSSH per-connection server daemon (10.0.0.1:55528). Apr 21 10:48:27.264817 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 55528 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:27.266314 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:27.270153 systemd-logind[1466]: New session 10 of user core. Apr 21 10:48:27.279204 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:48:27.402647 sshd[3374]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:27.405897 systemd[1]: sshd@9-10.0.0.157:22-10.0.0.1:55528.service: Deactivated successfully. Apr 21 10:48:27.407236 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:48:27.407901 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:48:27.408674 systemd-logind[1466]: Removed session 10. 
Apr 21 10:48:28.147128 kubelet[2526]: E0421 10:48:28.147031 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzjww" podUID="4cde7865-530b-49b0-8623-5246c29d042b" Apr 21 10:48:28.415127 containerd[1480]: time="2026-04-21T10:48:28.414754748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:28.415917 containerd[1480]: time="2026-04-21T10:48:28.415873735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:48:28.417415 containerd[1480]: time="2026-04-21T10:48:28.417367381Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:28.422328 containerd[1480]: time="2026-04-21T10:48:28.422256864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:28.423748 containerd[1480]: time="2026-04-21T10:48:28.423691607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.14628412s" Apr 21 10:48:28.423748 containerd[1480]: time="2026-04-21T10:48:28.423730457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:48:28.433885 containerd[1480]: time="2026-04-21T10:48:28.433802682Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:48:28.454024 containerd[1480]: time="2026-04-21T10:48:28.453954072Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476\"" Apr 21 10:48:28.454646 containerd[1480]: time="2026-04-21T10:48:28.454570452Z" level=info msg="StartContainer for \"3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476\"" Apr 21 10:48:28.487065 systemd[1]: Started cri-containerd-3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476.scope - libcontainer container 3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476. Apr 21 10:48:28.518016 containerd[1480]: time="2026-04-21T10:48:28.517957393Z" level=info msg="StartContainer for \"3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476\" returns successfully" Apr 21 10:48:28.949738 systemd[1]: cri-containerd-3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476.scope: Deactivated successfully. Apr 21 10:48:28.969940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476-rootfs.mount: Deactivated successfully. 
Apr 21 10:48:28.991974 containerd[1480]: time="2026-04-21T10:48:28.991904965Z" level=info msg="shim disconnected" id=3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476 namespace=k8s.io Apr 21 10:48:28.991974 containerd[1480]: time="2026-04-21T10:48:28.991958247Z" level=warning msg="cleaning up after shim disconnected" id=3b9460521f7cb43e24d22fa030169bbfe8eb7d905a20ddfb4118dc913d1e5476 namespace=k8s.io Apr 21 10:48:28.991974 containerd[1480]: time="2026-04-21T10:48:28.991966394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:29.015758 kubelet[2526]: I0421 10:48:29.015722 2526 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:48:29.062799 systemd[1]: Created slice kubepods-burstable-pod8a4497d2_deff_4a0e_bc07_c304fe7a4c6e.slice - libcontainer container kubepods-burstable-pod8a4497d2_deff_4a0e_bc07_c304fe7a4c6e.slice. Apr 21 10:48:29.072259 systemd[1]: Created slice kubepods-burstable-poda09f8f7d_21a8_4bda_84fc_452be1c07e7a.slice - libcontainer container kubepods-burstable-poda09f8f7d_21a8_4bda_84fc_452be1c07e7a.slice. Apr 21 10:48:29.075359 systemd[1]: Created slice kubepods-besteffort-podc39c62dd_7e89_4776_bcdb_f9958f5483e7.slice - libcontainer container kubepods-besteffort-podc39c62dd_7e89_4776_bcdb_f9958f5483e7.slice. Apr 21 10:48:29.081342 systemd[1]: Created slice kubepods-besteffort-pod90cacd3d_2a63_40bf_a72f_67fa5f649e88.slice - libcontainer container kubepods-besteffort-pod90cacd3d_2a63_40bf_a72f_67fa5f649e88.slice. Apr 21 10:48:29.086352 systemd[1]: Created slice kubepods-besteffort-poddd1ae5d1_50ed_4274_8444_94f487a665ed.slice - libcontainer container kubepods-besteffort-poddd1ae5d1_50ed_4274_8444_94f487a665ed.slice. Apr 21 10:48:29.091501 systemd[1]: Created slice kubepods-besteffort-podccfa94ae_c2bb_42dc_bf66_d035a841d8b5.slice - libcontainer container kubepods-besteffort-podccfa94ae_c2bb_42dc_bf66_d035a841d8b5.slice. 
Apr 21 10:48:29.095491 systemd[1]: Created slice kubepods-besteffort-pod97d51a84_9526_4bc2_b6fe_1533fc722244.slice - libcontainer container kubepods-besteffort-pod97d51a84_9526_4bc2_b6fe_1533fc722244.slice. Apr 21 10:48:29.157050 kubelet[2526]: I0421 10:48:29.156963 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrk2z\" (UniqueName: \"kubernetes.io/projected/8a4497d2-deff-4a0e-bc07-c304fe7a4c6e-kube-api-access-mrk2z\") pod \"coredns-674b8bbfcf-kp9lg\" (UID: \"8a4497d2-deff-4a0e-bc07-c304fe7a4c6e\") " pod="kube-system/coredns-674b8bbfcf-kp9lg" Apr 21 10:48:29.157050 kubelet[2526]: I0421 10:48:29.157048 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90cacd3d-2a63-40bf-a72f-67fa5f649e88-calico-apiserver-certs\") pod \"calico-apiserver-c65dc69c4-dhsll\" (UID: \"90cacd3d-2a63-40bf-a72f-67fa5f649e88\") " pod="calico-system/calico-apiserver-c65dc69c4-dhsll" Apr 21 10:48:29.157463 kubelet[2526]: I0421 10:48:29.157088 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97d51a84-9526-4bc2-b6fe-1533fc722244-calico-apiserver-certs\") pod \"calico-apiserver-c65dc69c4-h9lww\" (UID: \"97d51a84-9526-4bc2-b6fe-1533fc722244\") " pod="calico-system/calico-apiserver-c65dc69c4-h9lww" Apr 21 10:48:29.157463 kubelet[2526]: I0421 10:48:29.157136 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scw29\" (UniqueName: \"kubernetes.io/projected/97d51a84-9526-4bc2-b6fe-1533fc722244-kube-api-access-scw29\") pod \"calico-apiserver-c65dc69c4-h9lww\" (UID: \"97d51a84-9526-4bc2-b6fe-1533fc722244\") " pod="calico-system/calico-apiserver-c65dc69c4-h9lww" Apr 21 10:48:29.157463 kubelet[2526]: I0421 10:48:29.157162 2526 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4nm\" (UniqueName: \"kubernetes.io/projected/c39c62dd-7e89-4776-bcdb-f9958f5483e7-kube-api-access-km4nm\") pod \"calico-kube-controllers-8d98f594c-fjmst\" (UID: \"c39c62dd-7e89-4776-bcdb-f9958f5483e7\") " pod="calico-system/calico-kube-controllers-8d98f594c-fjmst" Apr 21 10:48:29.157463 kubelet[2526]: I0421 10:48:29.157192 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rt62\" (UniqueName: \"kubernetes.io/projected/dd1ae5d1-50ed-4274-8444-94f487a665ed-kube-api-access-5rt62\") pod \"goldmane-5b85766d88-hdntw\" (UID: \"dd1ae5d1-50ed-4274-8444-94f487a665ed\") " pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:29.157463 kubelet[2526]: I0421 10:48:29.157228 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ae5d1-50ed-4274-8444-94f487a665ed-config\") pod \"goldmane-5b85766d88-hdntw\" (UID: \"dd1ae5d1-50ed-4274-8444-94f487a665ed\") " pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:29.158003 kubelet[2526]: I0421 10:48:29.157253 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ae5d1-50ed-4274-8444-94f487a665ed-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-hdntw\" (UID: \"dd1ae5d1-50ed-4274-8444-94f487a665ed\") " pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:29.158003 kubelet[2526]: I0421 10:48:29.157277 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1ae5d1-50ed-4274-8444-94f487a665ed-goldmane-key-pair\") pod \"goldmane-5b85766d88-hdntw\" (UID: \"dd1ae5d1-50ed-4274-8444-94f487a665ed\") " 
pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:29.158003 kubelet[2526]: I0421 10:48:29.157328 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258vz\" (UniqueName: \"kubernetes.io/projected/a09f8f7d-21a8-4bda-84fc-452be1c07e7a-kube-api-access-258vz\") pod \"coredns-674b8bbfcf-zxzb7\" (UID: \"a09f8f7d-21a8-4bda-84fc-452be1c07e7a\") " pod="kube-system/coredns-674b8bbfcf-zxzb7" Apr 21 10:48:29.158003 kubelet[2526]: I0421 10:48:29.157357 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-ca-bundle\") pod \"whisker-7dc6cf9bc5-gw522\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " pod="calico-system/whisker-7dc6cf9bc5-gw522" Apr 21 10:48:29.158003 kubelet[2526]: I0421 10:48:29.157374 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c39c62dd-7e89-4776-bcdb-f9958f5483e7-tigera-ca-bundle\") pod \"calico-kube-controllers-8d98f594c-fjmst\" (UID: \"c39c62dd-7e89-4776-bcdb-f9958f5483e7\") " pod="calico-system/calico-kube-controllers-8d98f594c-fjmst" Apr 21 10:48:29.158093 kubelet[2526]: I0421 10:48:29.157388 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a4497d2-deff-4a0e-bc07-c304fe7a4c6e-config-volume\") pod \"coredns-674b8bbfcf-kp9lg\" (UID: \"8a4497d2-deff-4a0e-bc07-c304fe7a4c6e\") " pod="kube-system/coredns-674b8bbfcf-kp9lg" Apr 21 10:48:29.158093 kubelet[2526]: I0421 10:48:29.157426 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a09f8f7d-21a8-4bda-84fc-452be1c07e7a-config-volume\") pod 
\"coredns-674b8bbfcf-zxzb7\" (UID: \"a09f8f7d-21a8-4bda-84fc-452be1c07e7a\") " pod="kube-system/coredns-674b8bbfcf-zxzb7" Apr 21 10:48:29.158093 kubelet[2526]: I0421 10:48:29.157459 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-nginx-config\") pod \"whisker-7dc6cf9bc5-gw522\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " pod="calico-system/whisker-7dc6cf9bc5-gw522" Apr 21 10:48:29.158093 kubelet[2526]: I0421 10:48:29.157490 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64tk\" (UniqueName: \"kubernetes.io/projected/90cacd3d-2a63-40bf-a72f-67fa5f649e88-kube-api-access-l64tk\") pod \"calico-apiserver-c65dc69c4-dhsll\" (UID: \"90cacd3d-2a63-40bf-a72f-67fa5f649e88\") " pod="calico-system/calico-apiserver-c65dc69c4-dhsll" Apr 21 10:48:29.158093 kubelet[2526]: I0421 10:48:29.157517 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-backend-key-pair\") pod \"whisker-7dc6cf9bc5-gw522\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " pod="calico-system/whisker-7dc6cf9bc5-gw522" Apr 21 10:48:29.158289 kubelet[2526]: I0421 10:48:29.157554 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgkz\" (UniqueName: \"kubernetes.io/projected/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-kube-api-access-dpgkz\") pod \"whisker-7dc6cf9bc5-gw522\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " pod="calico-system/whisker-7dc6cf9bc5-gw522" Apr 21 10:48:29.296993 containerd[1480]: time="2026-04-21T10:48:29.296386475Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:48:29.311079 containerd[1480]: time="2026-04-21T10:48:29.311005699Z" level=info msg="CreateContainer within sandbox \"1e244a3f8eadd56bb9c05fe672c46183d7255c5d2ce2f629d6d1a2e5b77b4903\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c039adc01194459335bd7fbab09eae94f55531f89108874f77f11982f9b2f94\"" Apr 21 10:48:29.311716 containerd[1480]: time="2026-04-21T10:48:29.311682516Z" level=info msg="StartContainer for \"0c039adc01194459335bd7fbab09eae94f55531f89108874f77f11982f9b2f94\"" Apr 21 10:48:29.334083 systemd[1]: Started cri-containerd-0c039adc01194459335bd7fbab09eae94f55531f89108874f77f11982f9b2f94.scope - libcontainer container 0c039adc01194459335bd7fbab09eae94f55531f89108874f77f11982f9b2f94. Apr 21 10:48:29.358442 containerd[1480]: time="2026-04-21T10:48:29.358402277Z" level=info msg="StartContainer for \"0c039adc01194459335bd7fbab09eae94f55531f89108874f77f11982f9b2f94\" returns successfully" Apr 21 10:48:29.367782 kubelet[2526]: E0421 10:48:29.367706 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:29.368351 containerd[1480]: time="2026-04-21T10:48:29.368302711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kp9lg,Uid:8a4497d2-deff-4a0e-bc07-c304fe7a4c6e,Namespace:kube-system,Attempt:0,}" Apr 21 10:48:29.374798 kubelet[2526]: E0421 10:48:29.374685 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:29.376022 containerd[1480]: time="2026-04-21T10:48:29.375989377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzb7,Uid:a09f8f7d-21a8-4bda-84fc-452be1c07e7a,Namespace:kube-system,Attempt:0,}" Apr 21 10:48:29.380397 containerd[1480]: 
time="2026-04-21T10:48:29.380280593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d98f594c-fjmst,Uid:c39c62dd-7e89-4776-bcdb-f9958f5483e7,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:29.385820 containerd[1480]: time="2026-04-21T10:48:29.385585910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-dhsll,Uid:90cacd3d-2a63-40bf-a72f-67fa5f649e88,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:29.390741 containerd[1480]: time="2026-04-21T10:48:29.390295773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hdntw,Uid:dd1ae5d1-50ed-4274-8444-94f487a665ed,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:29.395285 containerd[1480]: time="2026-04-21T10:48:29.395241992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc6cf9bc5-gw522,Uid:ccfa94ae-c2bb-42dc-bf66-d035a841d8b5,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:29.401035 containerd[1480]: time="2026-04-21T10:48:29.400812442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-h9lww,Uid:97d51a84-9526-4bc2-b6fe-1533fc722244,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:30.151203 systemd[1]: Created slice kubepods-besteffort-pod4cde7865_530b_49b0_8623_5246c29d042b.slice - libcontainer container kubepods-besteffort-pod4cde7865_530b_49b0_8623_5246c29d042b.slice. 
Apr 21 10:48:30.153590 containerd[1480]: time="2026-04-21T10:48:30.153511554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzjww,Uid:4cde7865-530b-49b0-8623-5246c29d042b,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:30.309483 kubelet[2526]: I0421 10:48:30.307782 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xwrsn" podStartSLOduration=2.7262734 podStartE2EDuration="36.307769841s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:47:54.844045887 +0000 UTC m=+16.770343772" lastFinishedPulling="2026-04-21 10:48:28.425542323 +0000 UTC m=+50.351840213" observedRunningTime="2026-04-21 10:48:30.306543149 +0000 UTC m=+52.232841052" watchObservedRunningTime="2026-04-21 10:48:30.307769841 +0000 UTC m=+52.234067736" Apr 21 10:48:30.850737 systemd-networkd[1404]: caliaeb1eaee7a1: Link UP Apr 21 10:48:30.851227 systemd-networkd[1404]: caliaeb1eaee7a1: Gained carrier Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.602 [ERROR][3524] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.651 [INFO][3524] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0 calico-kube-controllers-8d98f594c- calico-system c39c62dd-7e89-4776-bcdb-f9958f5483e7 962 0 2026-04-21 10:47:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8d98f594c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8d98f594c-fjmst eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] caliaeb1eaee7a1 [] [] }} ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.651 [INFO][3524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.711 [INFO][3683] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" HandleID="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Workload="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.720 [INFO][3683] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" HandleID="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Workload="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8d98f594c-fjmst", "timestamp":"2026-04-21 10:48:29.711209189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00059fa20)} Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 
10:48:29.720 [INFO][3683] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.721 [INFO][3683] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.721 [INFO][3683] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.725 [INFO][3683] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.742 [INFO][3683] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:29.797 [INFO][3683] ipam/ipam.go 1965: Failed to create global IPAM config; another node got there first. Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.802 [INFO][3683] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.805 [INFO][3683] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.807 [INFO][3683] ipam/ipam.go 588: Found unclaimed block in 2.376617ms host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.807 [INFO][3683] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.812 [INFO][3683] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.815 [INFO][3683] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.815 [INFO][3683] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.815 [INFO][3683] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.818 [INFO][3683] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.818 [INFO][3683] ipam/ipam.go 623: Block '192.168.88.128/26' has 63 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.818 [INFO][3683] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.821 [INFO][3683] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103 Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.826 [INFO][3683] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3683] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3683] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" host="localhost" Apr 21 10:48:30.868982 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3683] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3683] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" HandleID="k8s-pod-network.8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Workload="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.836 [INFO][3524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0", GenerateName:"calico-kube-controllers-8d98f594c-", Namespace:"calico-system", SelfLink:"", UID:"c39c62dd-7e89-4776-bcdb-f9958f5483e7", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d98f594c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8d98f594c-fjmst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb1eaee7a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.836 [INFO][3524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.836 [INFO][3524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeb1eaee7a1 ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.850 [INFO][3524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.852 [INFO][3524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0", GenerateName:"calico-kube-controllers-8d98f594c-", Namespace:"calico-system", SelfLink:"", UID:"c39c62dd-7e89-4776-bcdb-f9958f5483e7", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d98f594c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103", Pod:"calico-kube-controllers-8d98f594c-fjmst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb1eaee7a1", MAC:"f2:49:fe:89:a6:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:30.869995 containerd[1480]: 2026-04-21 10:48:30.864 [INFO][3524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103" Namespace="calico-system" Pod="calico-kube-controllers-8d98f594c-fjmst" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8d98f594c--fjmst-eth0" Apr 21 10:48:30.890553 systemd-networkd[1404]: cali37cda7847d6: Link UP Apr 21 10:48:30.890891 systemd-networkd[1404]: cali37cda7847d6: Gained 
carrier Apr 21 10:48:30.902621 containerd[1480]: time="2026-04-21T10:48:30.902378849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:30.902621 containerd[1480]: time="2026-04-21T10:48:30.902427708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:30.902621 containerd[1480]: time="2026-04-21T10:48:30.902439811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:30.902749 containerd[1480]: time="2026-04-21T10:48:30.902624471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.599 [ERROR][3553] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.656 [INFO][3553] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0 calico-apiserver-c65dc69c4- calico-system 97d51a84-9526-4bc2-b6fe-1533fc722244 965 0 2026-04-21 10:47:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c65dc69c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c65dc69c4-h9lww eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali37cda7847d6 [] [] }} ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" 
Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.656 [INFO][3553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.737 [INFO][3680] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" HandleID="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Workload="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.796 [INFO][3680] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" HandleID="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Workload="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c65dc69c4-h9lww", "timestamp":"2026-04-21 10:48:29.737271807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000354f20)} Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:29.796 [INFO][3680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.831 [INFO][3680] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.836 [INFO][3680] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.842 [INFO][3680] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.851 [INFO][3680] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.858 [INFO][3680] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.860 [INFO][3680] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.860 [INFO][3680] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.865 [INFO][3680] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454 Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.869 [INFO][3680] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.882 [INFO][3680] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.882 [INFO][3680] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" host="localhost" Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.882 [INFO][3680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:30.931279 containerd[1480]: 2026-04-21 10:48:30.882 [INFO][3680] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" HandleID="k8s-pod-network.deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Workload="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.889 [INFO][3553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0", GenerateName:"calico-apiserver-c65dc69c4-", Namespace:"calico-system", SelfLink:"", UID:"97d51a84-9526-4bc2-b6fe-1533fc722244", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65dc69c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c65dc69c4-h9lww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37cda7847d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.889 [INFO][3553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.889 [INFO][3553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37cda7847d6 ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.893 [INFO][3553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.893 [INFO][3553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0", GenerateName:"calico-apiserver-c65dc69c4-", Namespace:"calico-system", SelfLink:"", UID:"97d51a84-9526-4bc2-b6fe-1533fc722244", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65dc69c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454", Pod:"calico-apiserver-c65dc69c4-h9lww", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali37cda7847d6", MAC:"3a:b0:ce:72:29:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:30.931802 containerd[1480]: 2026-04-21 10:48:30.923 [INFO][3553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454" 
Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-h9lww" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--h9lww-eth0" Apr 21 10:48:30.947258 systemd[1]: Started cri-containerd-8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103.scope - libcontainer container 8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103. Apr 21 10:48:30.977545 containerd[1480]: time="2026-04-21T10:48:30.977289722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:30.977545 containerd[1480]: time="2026-04-21T10:48:30.977339701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:30.977545 containerd[1480]: time="2026-04-21T10:48:30.977348660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:30.977545 containerd[1480]: time="2026-04-21T10:48:30.977413187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:30.994361 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.004445 systemd-networkd[1404]: cali06cecbb44f0: Link UP Apr 21 10:48:31.007323 systemd-networkd[1404]: cali06cecbb44f0: Gained carrier Apr 21 10:48:31.018198 systemd[1]: Started cri-containerd-deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454.scope - libcontainer container deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454. 
Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.784 [INFO][3668] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.785 [INFO][3668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" iface="eth0" netns="/var/run/netns/cni-07a6813c-a677-5a1e-6691-99e8d810f656" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.786 [INFO][3668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" iface="eth0" netns="/var/run/netns/cni-07a6813c-a677-5a1e-6691-99e8d810f656" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.786 [INFO][3668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" iface="eth0" netns="/var/run/netns/cni-07a6813c-a677-5a1e-6691-99e8d810f656" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.786 [INFO][3668] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.786 [INFO][3668] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.834 [INFO][3716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" HandleID="k8s-pod-network.267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:29.834 [INFO][3716] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:30.979 [INFO][3716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:30.987 [WARNING][3716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" HandleID="k8s-pod-network.267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:30.987 [INFO][3716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" HandleID="k8s-pod-network.267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:30.991 [INFO][3716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.030474 containerd[1480]: 2026-04-21 10:48:31.002 [INFO][3668] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.792 [INFO][3654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.792 [INFO][3654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" iface="eth0" netns="/var/run/netns/cni-11ef0636-348f-9a54-5f31-2b1057e1e67f" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.792 [INFO][3654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" iface="eth0" netns="/var/run/netns/cni-11ef0636-348f-9a54-5f31-2b1057e1e67f" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.793 [INFO][3654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" iface="eth0" netns="/var/run/netns/cni-11ef0636-348f-9a54-5f31-2b1057e1e67f" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.793 [INFO][3654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.793 [INFO][3654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.831 [INFO][3726] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" HandleID="k8s-pod-network.78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:29.837 [INFO][3726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:30.992 [INFO][3726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:31.001 [WARNING][3726] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" HandleID="k8s-pod-network.78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:31.001 [INFO][3726] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" HandleID="k8s-pod-network.78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:31.005 [INFO][3726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.036458 containerd[1480]: 2026-04-21 10:48:31.014 [INFO][3654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0" Apr 21 10:48:31.041381 containerd[1480]: time="2026-04-21T10:48:31.041240705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzb7,Uid:a09f8f7d-21a8-4bda-84fc-452be1c07e7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.044779 containerd[1480]: time="2026-04-21T10:48:31.044733475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-dhsll,Uid:90cacd3d-2a63-40bf-a72f-67fa5f649e88,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.611 [ERROR][3548] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.657 [INFO][3548] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0 whisker-7dc6cf9bc5- calico-system ccfa94ae-c2bb-42dc-bf66-d035a841d8b5 983 0 2026-04-21 10:48:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dc6cf9bc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dc6cf9bc5-gw522 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali06cecbb44f0 [] [] }} ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.657 [INFO][3548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.794 [INFO][3682] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.818 [INFO][3682] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e38e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dc6cf9bc5-gw522", "timestamp":"2026-04-21 10:48:29.794425017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000330580)} Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:29.818 [INFO][3682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.883 [INFO][3682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.883 [INFO][3682] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.945 [INFO][3682] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.952 [INFO][3682] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.957 [INFO][3682] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.959 [INFO][3682] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.961 [INFO][3682] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.961 [INFO][3682] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.963 [INFO][3682] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9 Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.968 [INFO][3682] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.979 [INFO][3682] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.979 [INFO][3682] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" host="localhost" Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.979 [INFO][3682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.048896 containerd[1480]: 2026-04-21 10:48:30.979 [INFO][3682] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:30.995 [INFO][3548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0", GenerateName:"whisker-7dc6cf9bc5-", Namespace:"calico-system", SelfLink:"", UID:"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dc6cf9bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dc6cf9bc5-gw522", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06cecbb44f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:30.995 [INFO][3548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:30.995 [INFO][3548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06cecbb44f0 ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:31.007 [INFO][3548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:31.010 [INFO][3548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" 
WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0", GenerateName:"whisker-7dc6cf9bc5-", Namespace:"calico-system", SelfLink:"", UID:"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 48, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dc6cf9bc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9", Pod:"whisker-7dc6cf9bc5-gw522", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali06cecbb44f0", MAC:"e2:f5:96:09:65:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.049360 containerd[1480]: 2026-04-21 10:48:31.035 [INFO][3548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Namespace="calico-system" Pod="whisker-7dc6cf9bc5-gw522" WorkloadEndpoint="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.782 [INFO][3667] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.784 [INFO][3667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" iface="eth0" netns="/var/run/netns/cni-bf2ba927-aff4-fb40-0bf1-45d77c0cc984" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.785 [INFO][3667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" iface="eth0" netns="/var/run/netns/cni-bf2ba927-aff4-fb40-0bf1-45d77c0cc984" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.788 [INFO][3667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" iface="eth0" netns="/var/run/netns/cni-bf2ba927-aff4-fb40-0bf1-45d77c0cc984" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.789 [INFO][3667] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.789 [INFO][3667] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.838 [INFO][3724] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" HandleID="k8s-pod-network.ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:29.838 [INFO][3724] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:31.010 [INFO][3724] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:31.053 [WARNING][3724] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" HandleID="k8s-pod-network.ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:31.053 [INFO][3724] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" HandleID="k8s-pod-network.ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:31.054 [INFO][3724] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.059217 containerd[1480]: 2026-04-21 10:48:31.057 [INFO][3667] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e" Apr 21 10:48:31.061876 kubelet[2526]: E0421 10:48:31.061754 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.062076 kubelet[2526]: E0421 10:48:31.061890 2526 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c65dc69c4-dhsll" Apr 21 10:48:31.062076 kubelet[2526]: E0421 10:48:31.061915 2526 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-c65dc69c4-dhsll" Apr 21 10:48:31.062076 kubelet[2526]: E0421 10:48:31.061966 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c65dc69c4-dhsll_calico-system(90cacd3d-2a63-40bf-a72f-67fa5f649e88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c65dc69c4-dhsll_calico-system(90cacd3d-2a63-40bf-a72f-67fa5f649e88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-c65dc69c4-dhsll" podUID="90cacd3d-2a63-40bf-a72f-67fa5f649e88" Apr 21 10:48:31.062239 kubelet[2526]: E0421 10:48:31.062028 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.062239 kubelet[2526]: E0421 10:48:31.062050 2526 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zxzb7" Apr 21 10:48:31.062239 kubelet[2526]: E0421 10:48:31.062061 2526 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zxzb7" Apr 21 10:48:31.062318 kubelet[2526]: E0421 10:48:31.062085 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-zxzb7_kube-system(a09f8f7d-21a8-4bda-84fc-452be1c07e7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zxzb7_kube-system(a09f8f7d-21a8-4bda-84fc-452be1c07e7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zxzb7" podUID="a09f8f7d-21a8-4bda-84fc-452be1c07e7a" Apr 21 10:48:31.064040 containerd[1480]: time="2026-04-21T10:48:31.063889062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kp9lg,Uid:8a4497d2-deff-4a0e-bc07-c304fe7a4c6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.064756 kubelet[2526]: E0421 10:48:31.064564 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.064756 kubelet[2526]: E0421 10:48:31.064683 2526 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kp9lg" Apr 21 10:48:31.064756 kubelet[2526]: E0421 10:48:31.064706 2526 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kp9lg" Apr 21 10:48:31.065520 kubelet[2526]: E0421 10:48:31.065436 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kp9lg_kube-system(8a4497d2-deff-4a0e-bc07-c304fe7a4c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kp9lg_kube-system(8a4497d2-deff-4a0e-bc07-c304fe7a4c6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kp9lg" podUID="8a4497d2-deff-4a0e-bc07-c304fe7a4c6e" Apr 21 10:48:31.071388 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.092292 containerd[1480]: time="2026-04-21T10:48:31.091683873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.092292 containerd[1480]: time="2026-04-21T10:48:31.091759536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.092292 containerd[1480]: time="2026-04-21T10:48:31.091772511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.092292 containerd[1480]: time="2026-04-21T10:48:31.091959907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.816 [INFO][3660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.816 [INFO][3660] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" iface="eth0" netns="/var/run/netns/cni-ceb0fdfc-c424-c66f-e8cf-78be603a91b8" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.817 [INFO][3660] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" iface="eth0" netns="/var/run/netns/cni-ceb0fdfc-c424-c66f-e8cf-78be603a91b8" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.817 [INFO][3660] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" iface="eth0" netns="/var/run/netns/cni-ceb0fdfc-c424-c66f-e8cf-78be603a91b8" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.817 [INFO][3660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.817 [INFO][3660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.860 [INFO][3739] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" HandleID="k8s-pod-network.b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:29.860 [INFO][3739] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:31.056 [INFO][3739] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:31.076 [WARNING][3739] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" HandleID="k8s-pod-network.b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:31.076 [INFO][3739] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" HandleID="k8s-pod-network.b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:31.081 [INFO][3739] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.103721 containerd[1480]: 2026-04-21 10:48:31.088 [INFO][3660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f" Apr 21 10:48:31.103721 containerd[1480]: time="2026-04-21T10:48:31.103470980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d98f594c-fjmst,Uid:c39c62dd-7e89-4776-bcdb-f9958f5483e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103\"" Apr 21 10:48:31.111357 containerd[1480]: time="2026-04-21T10:48:31.111328113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hdntw,Uid:dd1ae5d1-50ed-4274-8444-94f487a665ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.111722 containerd[1480]: time="2026-04-21T10:48:31.111452185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" 
Apr 21 10:48:31.111893 kubelet[2526]: E0421 10:48:31.111789 2526 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:48:31.112632 kubelet[2526]: E0421 10:48:31.111841 2526 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:31.112632 kubelet[2526]: E0421 10:48:31.112622 2526 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hdntw" Apr 21 10:48:31.112986 kubelet[2526]: E0421 10:48:31.112748 2526 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-hdntw_calico-system(dd1ae5d1-50ed-4274-8444-94f487a665ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-hdntw_calico-system(dd1ae5d1-50ed-4274-8444-94f487a665ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hdntw" podUID="dd1ae5d1-50ed-4274-8444-94f487a665ed" Apr 21 10:48:31.117632 systemd[1]: Started cri-containerd-8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9.scope - libcontainer container 8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9. Apr 21 10:48:31.161342 containerd[1480]: time="2026-04-21T10:48:31.161291225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-h9lww,Uid:97d51a84-9526-4bc2-b6fe-1533fc722244,Namespace:calico-system,Attempt:0,} returns sandbox id \"deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454\"" Apr 21 10:48:31.174092 systemd-networkd[1404]: calie4022b043e1: Link UP Apr 21 10:48:31.174422 systemd-networkd[1404]: calie4022b043e1: Gained carrier Apr 21 10:48:31.185457 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.202 [ERROR][3761] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.211 [INFO][3761] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mzjww-eth0 csi-node-driver- calico-system 4cde7865-530b-49b0-8623-5246c29d042b 712 0 2026-04-21 10:47:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost 
csi-node-driver-mzjww eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4022b043e1 [] [] }} ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.211 [INFO][3761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.233 [INFO][3775] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" HandleID="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Workload="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.240 [INFO][3775] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" HandleID="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Workload="localhost-k8s-csi--node--driver--mzjww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mzjww", "timestamp":"2026-04-21 10:48:30.233975832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017b4a0)} Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:30.240 [INFO][3775] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.081 [INFO][3775] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.082 [INFO][3775] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.091 [INFO][3775] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.105 [INFO][3775] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.123 [INFO][3775] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.128 [INFO][3775] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.131 [INFO][3775] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.131 [INFO][3775] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.137 [INFO][3775] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92 Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.148 [INFO][3775] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.155 
[INFO][3775] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.155 [INFO][3775] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" host="localhost" Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.156 [INFO][3775] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.192939 containerd[1480]: 2026-04-21 10:48:31.156 [INFO][3775] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" HandleID="k8s-pod-network.6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Workload="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.163 [INFO][3761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mzjww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cde7865-530b-49b0-8623-5246c29d042b", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mzjww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4022b043e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.164 [INFO][3761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.165 [INFO][3761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4022b043e1 ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.178 [INFO][3761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.178 [INFO][3761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mzjww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cde7865-530b-49b0-8623-5246c29d042b", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92", Pod:"csi-node-driver-mzjww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4022b043e1", MAC:"fa:fe:f7:a9:1b:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.193647 containerd[1480]: 2026-04-21 10:48:31.189 [INFO][3761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92" 
Namespace="calico-system" Pod="csi-node-driver-mzjww" WorkloadEndpoint="localhost-k8s-csi--node--driver--mzjww-eth0" Apr 21 10:48:31.218965 containerd[1480]: time="2026-04-21T10:48:31.218841910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.218965 containerd[1480]: time="2026-04-21T10:48:31.218921395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.218965 containerd[1480]: time="2026-04-21T10:48:31.218929608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.219101 containerd[1480]: time="2026-04-21T10:48:31.218982759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.233132 kernel: calico-node[3921]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:48:31.237968 containerd[1480]: time="2026-04-21T10:48:31.236518348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc6cf9bc5-gw522,Uid:ccfa94ae-c2bb-42dc-bf66-d035a841d8b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\"" Apr 21 10:48:31.252166 systemd[1]: Started cri-containerd-6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92.scope - libcontainer container 6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92. 
Apr 21 10:48:31.308411 kubelet[2526]: E0421 10:48:31.308358 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:31.310006 containerd[1480]: time="2026-04-21T10:48:31.309380889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hdntw,Uid:dd1ae5d1-50ed-4274-8444-94f487a665ed,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:31.310006 containerd[1480]: time="2026-04-21T10:48:31.309911458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-dhsll,Uid:90cacd3d-2a63-40bf-a72f-67fa5f649e88,Namespace:calico-system,Attempt:0,}" Apr 21 10:48:31.310161 kubelet[2526]: E0421 10:48:31.308918 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:31.310634 containerd[1480]: time="2026-04-21T10:48:31.310615050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzb7,Uid:a09f8f7d-21a8-4bda-84fc-452be1c07e7a,Namespace:kube-system,Attempt:0,}" Apr 21 10:48:31.311013 containerd[1480]: time="2026-04-21T10:48:31.310960454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kp9lg,Uid:8a4497d2-deff-4a0e-bc07-c304fe7a4c6e,Namespace:kube-system,Attempt:0,}" Apr 21 10:48:31.360107 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.424040 containerd[1480]: time="2026-04-21T10:48:31.424004596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzjww,Uid:4cde7865-530b-49b0-8623-5246c29d042b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92\"" Apr 21 10:48:31.464095 systemd[1]: 
run-netns-cni\x2dceb0fdfc\x2dc424\x2dc66f\x2de8cf\x2d78be603a91b8.mount: Deactivated successfully. Apr 21 10:48:31.467705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1a4e1b8296a7075a833f22fcf9ad64e114ff5fcd6453d9e0416b550c183fb3f-shm.mount: Deactivated successfully. Apr 21 10:48:31.467786 systemd[1]: run-netns-cni\x2d11ef0636\x2d348f\x2d9a54\x2d5f31\x2d2b1057e1e67f.mount: Deactivated successfully. Apr 21 10:48:31.467825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78d29bae59a4769ac94e406e25415eafda4a5ea0cef2f4e51bd26724ad0325f0-shm.mount: Deactivated successfully. Apr 21 10:48:31.467917 systemd[1]: run-netns-cni\x2d07a6813c\x2da677\x2d5a1e\x2d6691\x2d99e8d810f656.mount: Deactivated successfully. Apr 21 10:48:31.467955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-267bb75cf5dbc170d373a89b1ab5e61aa60a5f0b1443c6f9c54653a7f7b64eaf-shm.mount: Deactivated successfully. Apr 21 10:48:31.467997 systemd[1]: run-netns-cni\x2dbf2ba927\x2daff4\x2dfb40\x2d0bf1\x2d45d77c0cc984.mount: Deactivated successfully. Apr 21 10:48:31.468032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccb9918dcb2fef53d7e07a4bf15694e120e2ef1e42e1d1b413d0e15e6394238e-shm.mount: Deactivated successfully. 
Apr 21 10:48:31.566788 systemd-networkd[1404]: cali9a04f1ab1df: Link UP Apr 21 10:48:31.567505 systemd-networkd[1404]: cali9a04f1ab1df: Gained carrier Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.455 [INFO][4160] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0 coredns-674b8bbfcf- kube-system a09f8f7d-21a8-4bda-84fc-452be1c07e7a 991 0 2026-04-21 10:47:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zxzb7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a04f1ab1df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.456 [INFO][4160] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.505 [INFO][4211] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" HandleID="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.512 [INFO][4211] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" 
HandleID="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zxzb7", "timestamp":"2026-04-21 10:48:31.505528822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000260580)} Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.512 [INFO][4211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.512 [INFO][4211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.512 [INFO][4211] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.514 [INFO][4211] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.520 [INFO][4211] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.528 [INFO][4211] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.532 [INFO][4211] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.539 [INFO][4211] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.581329 
containerd[1480]: 2026-04-21 10:48:31.539 [INFO][4211] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.544 [INFO][4211] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.551 [INFO][4211] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.559 [INFO][4211] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.559 [INFO][4211] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" host="localhost" Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.559 [INFO][4211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:48:31.581329 containerd[1480]: 2026-04-21 10:48:31.559 [INFO][4211] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" HandleID="k8s-pod-network.d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Workload="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.562 [INFO][4160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a09f8f7d-21a8-4bda-84fc-452be1c07e7a", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zxzb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a04f1ab1df", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.562 [INFO][4160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.562 [INFO][4160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a04f1ab1df ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.568 [INFO][4160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.568 [INFO][4160] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a09f8f7d-21a8-4bda-84fc-452be1c07e7a", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b", Pod:"coredns-674b8bbfcf-zxzb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a04f1ab1df", MAC:"c2:72:a9:2c:15:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.581788 containerd[1480]: 2026-04-21 10:48:31.577 [INFO][4160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zxzb7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zxzb7-eth0" Apr 21 10:48:31.625937 containerd[1480]: time="2026-04-21T10:48:31.625629016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.625937 containerd[1480]: time="2026-04-21T10:48:31.625700523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.625937 containerd[1480]: time="2026-04-21T10:48:31.625712973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.628354 containerd[1480]: time="2026-04-21T10:48:31.627488285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.658064 systemd[1]: Started cri-containerd-d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b.scope - libcontainer container d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b. 
Apr 21 10:48:31.669730 systemd-networkd[1404]: calif5198ef5700: Link UP Apr 21 10:48:31.673768 systemd-networkd[1404]: calif5198ef5700: Gained carrier Apr 21 10:48:31.674407 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.470 [INFO][4138] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0 calico-apiserver-c65dc69c4- calico-system 90cacd3d-2a63-40bf-a72f-67fa5f649e88 992 0 2026-04-21 10:47:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c65dc69c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c65dc69c4-dhsll eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif5198ef5700 [] [] }} ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.470 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.538 [INFO][4221] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" HandleID="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" 
Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.548 [INFO][4221] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" HandleID="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367d00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-c65dc69c4-dhsll", "timestamp":"2026-04-21 10:48:31.538744096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004c5b80)} Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.548 [INFO][4221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.560 [INFO][4221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.560 [INFO][4221] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.616 [INFO][4221] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.621 [INFO][4221] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.628 [INFO][4221] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.633 [INFO][4221] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.637 [INFO][4221] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.637 [INFO][4221] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.640 [INFO][4221] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43 Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.648 [INFO][4221] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.659 [INFO][4221] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.659 [INFO][4221] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" host="localhost" Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.660 [INFO][4221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.692010 containerd[1480]: 2026-04-21 10:48:31.660 [INFO][4221] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" HandleID="k8s-pod-network.bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Workload="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.663 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0", GenerateName:"calico-apiserver-c65dc69c4-", Namespace:"calico-system", SelfLink:"", UID:"90cacd3d-2a63-40bf-a72f-67fa5f649e88", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65dc69c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c65dc69c4-dhsll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif5198ef5700", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.664 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.664 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5198ef5700 ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.676 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.676 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0", GenerateName:"calico-apiserver-c65dc69c4-", Namespace:"calico-system", SelfLink:"", UID:"90cacd3d-2a63-40bf-a72f-67fa5f649e88", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c65dc69c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43", Pod:"calico-apiserver-c65dc69c4-dhsll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif5198ef5700", MAC:"de:b7:25:d7:4b:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.693314 containerd[1480]: 2026-04-21 10:48:31.689 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43" 
Namespace="calico-system" Pod="calico-apiserver-c65dc69c4-dhsll" WorkloadEndpoint="localhost-k8s-calico--apiserver--c65dc69c4--dhsll-eth0" Apr 21 10:48:31.703649 containerd[1480]: time="2026-04-21T10:48:31.703582582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zxzb7,Uid:a09f8f7d-21a8-4bda-84fc-452be1c07e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b\"" Apr 21 10:48:31.705430 kubelet[2526]: E0421 10:48:31.705181 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:31.712386 containerd[1480]: time="2026-04-21T10:48:31.712358716Z" level=info msg="CreateContainer within sandbox \"d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:48:31.717318 containerd[1480]: time="2026-04-21T10:48:31.717239240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.717404 containerd[1480]: time="2026-04-21T10:48:31.717310252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.717404 containerd[1480]: time="2026-04-21T10:48:31.717328887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.719962 containerd[1480]: time="2026-04-21T10:48:31.719816441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.730710 containerd[1480]: time="2026-04-21T10:48:31.730681669Z" level=info msg="CreateContainer within sandbox \"d1abb307f8f52675b3e275cd0c55b0991ec8cdb59f7e8b4f32f3066a0fb1ae8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73353cf083cc746787ff21f8ee6e06f9a27fbd9c53e4cfb5b6cfe2999417a20a\"" Apr 21 10:48:31.731998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648373789.mount: Deactivated successfully. Apr 21 10:48:31.735319 containerd[1480]: time="2026-04-21T10:48:31.735260466Z" level=info msg="StartContainer for \"73353cf083cc746787ff21f8ee6e06f9a27fbd9c53e4cfb5b6cfe2999417a20a\"" Apr 21 10:48:31.750203 systemd[1]: Started cri-containerd-bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43.scope - libcontainer container bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43. Apr 21 10:48:31.771242 systemd-networkd[1404]: cali097d89f618c: Link UP Apr 21 10:48:31.771793 systemd-networkd[1404]: cali097d89f618c: Gained carrier Apr 21 10:48:31.772285 systemd[1]: Started cri-containerd-73353cf083cc746787ff21f8ee6e06f9a27fbd9c53e4cfb5b6cfe2999417a20a.scope - libcontainer container 73353cf083cc746787ff21f8ee6e06f9a27fbd9c53e4cfb5b6cfe2999417a20a. 
Apr 21 10:48:31.777221 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.504 [INFO][4158] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--hdntw-eth0 goldmane-5b85766d88- calico-system dd1ae5d1-50ed-4274-8444-94f487a665ed 994 0 2026-04-21 10:47:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-hdntw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali097d89f618c [] [] }} ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.504 [INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.538 [INFO][4236] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" HandleID="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.551 [INFO][4236] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" 
HandleID="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-hdntw", "timestamp":"2026-04-21 10:48:31.538087549 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001926e0)} Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.551 [INFO][4236] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.660 [INFO][4236] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.660 [INFO][4236] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.717 [INFO][4236] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.729 [INFO][4236] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.746 [INFO][4236] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.748 [INFO][4236] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.750 [INFO][4236] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.786235 
containerd[1480]: 2026-04-21 10:48:31.751 [INFO][4236] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.752 [INFO][4236] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0 Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.758 [INFO][4236] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4236] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4236] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" host="localhost" Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4236] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:48:31.786235 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4236] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" HandleID="k8s-pod-network.f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Workload="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.769 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--hdntw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"dd1ae5d1-50ed-4274-8444-94f487a665ed", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-hdntw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali097d89f618c", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.769 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.769 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali097d89f618c ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.772 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.772 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--hdntw-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"dd1ae5d1-50ed-4274-8444-94f487a665ed", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 54, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0", Pod:"goldmane-5b85766d88-hdntw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali097d89f618c", MAC:"d2:30:8b:4c:fc:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.786683 containerd[1480]: 2026-04-21 10:48:31.782 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0" Namespace="calico-system" Pod="goldmane-5b85766d88-hdntw" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--hdntw-eth0" Apr 21 10:48:31.808469 containerd[1480]: time="2026-04-21T10:48:31.808030696Z" level=info msg="StartContainer for \"73353cf083cc746787ff21f8ee6e06f9a27fbd9c53e4cfb5b6cfe2999417a20a\" returns successfully" Apr 21 10:48:31.826665 containerd[1480]: time="2026-04-21T10:48:31.821311524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.826665 containerd[1480]: time="2026-04-21T10:48:31.821363099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.826665 containerd[1480]: time="2026-04-21T10:48:31.821380109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.826665 containerd[1480]: time="2026-04-21T10:48:31.821463516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.831448 containerd[1480]: time="2026-04-21T10:48:31.831404218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c65dc69c4-dhsll,Uid:90cacd3d-2a63-40bf-a72f-67fa5f649e88,Namespace:calico-system,Attempt:0,} returns sandbox id \"bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43\"" Apr 21 10:48:31.857083 systemd[1]: Started cri-containerd-f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0.scope - libcontainer container f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0. 
Apr 21 10:48:31.881039 systemd-networkd[1404]: calia3a271c624f: Link UP Apr 21 10:48:31.881889 systemd-networkd[1404]: calia3a271c624f: Gained carrier Apr 21 10:48:31.885725 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:31.891613 systemd-networkd[1404]: vxlan.calico: Link UP Apr 21 10:48:31.891618 systemd-networkd[1404]: vxlan.calico: Gained carrier Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.492 [INFO][4166] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0 coredns-674b8bbfcf- kube-system 8a4497d2-deff-4a0e-bc07-c304fe7a4c6e 989 0 2026-04-21 10:47:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-kp9lg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3a271c624f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.492 [INFO][4166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.560 [INFO][4229] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" HandleID="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" 
Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.568 [INFO][4229] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" HandleID="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004089c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-kp9lg", "timestamp":"2026-04-21 10:48:31.560030838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005d4dc0)} Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.568 [INFO][4229] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4229] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.766 [INFO][4229] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.816 [INFO][4229] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.836 [INFO][4229] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.844 [INFO][4229] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.849 [INFO][4229] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.852 [INFO][4229] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.852 [INFO][4229] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.854 [INFO][4229] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.857 [INFO][4229] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.867 [INFO][4229] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.867 [INFO][4229] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" host="localhost" Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.867 [INFO][4229] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:31.901210 containerd[1480]: 2026-04-21 10:48:31.867 [INFO][4229] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" HandleID="k8s-pod-network.027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Workload="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.870 [INFO][4166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a4497d2-deff-4a0e-bc07-c304fe7a4c6e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-kp9lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3a271c624f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.871 [INFO][4166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.871 [INFO][4166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3a271c624f ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.880 [INFO][4166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.881 [INFO][4166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a4497d2-deff-4a0e-bc07-c304fe7a4c6e", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 47, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a", Pod:"coredns-674b8bbfcf-kp9lg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3a271c624f", MAC:"96:12:f1:92:dc:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:48:31.901633 containerd[1480]: 2026-04-21 10:48:31.896 [INFO][4166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-kp9lg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kp9lg-eth0" Apr 21 10:48:31.942899 containerd[1480]: time="2026-04-21T10:48:31.942576735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:48:31.943076 containerd[1480]: time="2026-04-21T10:48:31.943017580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:48:31.943076 containerd[1480]: time="2026-04-21T10:48:31.943046091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.943151 containerd[1480]: time="2026-04-21T10:48:31.943104105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:48:31.947987 containerd[1480]: time="2026-04-21T10:48:31.947954426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hdntw,Uid:dd1ae5d1-50ed-4274-8444-94f487a665ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0\"" Apr 21 10:48:31.971044 systemd[1]: Started cri-containerd-027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a.scope - libcontainer container 027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a. 
Apr 21 10:48:31.981170 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 10:48:32.008766 containerd[1480]: time="2026-04-21T10:48:32.008716978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kp9lg,Uid:8a4497d2-deff-4a0e-bc07-c304fe7a4c6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a\"" Apr 21 10:48:32.009538 kubelet[2526]: E0421 10:48:32.009413 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:32.015717 containerd[1480]: time="2026-04-21T10:48:32.015684019Z" level=info msg="CreateContainer within sandbox \"027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:48:32.033184 containerd[1480]: time="2026-04-21T10:48:32.033091644Z" level=info msg="CreateContainer within sandbox \"027bbe37c09eac0d08823c9f0dac6268db3e13863975296845a445f3cd684a2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4bc0c3bc2d0236e28e2fd5649134e4e0ff9e8cb78c95dad1c880c38188aee15f\"" Apr 21 10:48:32.033903 containerd[1480]: time="2026-04-21T10:48:32.033884101Z" level=info msg="StartContainer for \"4bc0c3bc2d0236e28e2fd5649134e4e0ff9e8cb78c95dad1c880c38188aee15f\"" Apr 21 10:48:32.040007 systemd-networkd[1404]: cali37cda7847d6: Gained IPv6LL Apr 21 10:48:32.061062 systemd[1]: Started cri-containerd-4bc0c3bc2d0236e28e2fd5649134e4e0ff9e8cb78c95dad1c880c38188aee15f.scope - libcontainer container 4bc0c3bc2d0236e28e2fd5649134e4e0ff9e8cb78c95dad1c880c38188aee15f. 
Apr 21 10:48:32.087700 containerd[1480]: time="2026-04-21T10:48:32.087654731Z" level=info msg="StartContainer for \"4bc0c3bc2d0236e28e2fd5649134e4e0ff9e8cb78c95dad1c880c38188aee15f\" returns successfully" Apr 21 10:48:32.107283 systemd-networkd[1404]: caliaeb1eaee7a1: Gained IPv6LL Apr 21 10:48:32.311610 kubelet[2526]: E0421 10:48:32.311282 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:32.316029 kubelet[2526]: E0421 10:48:32.315948 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:32.333598 kubelet[2526]: I0421 10:48:32.333275 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zxzb7" podStartSLOduration=49.333255109 podStartE2EDuration="49.333255109s" podCreationTimestamp="2026-04-21 10:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:48:32.322060786 +0000 UTC m=+54.248358685" watchObservedRunningTime="2026-04-21 10:48:32.333255109 +0000 UTC m=+54.259553005" Apr 21 10:48:32.343619 kubelet[2526]: I0421 10:48:32.343490 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kp9lg" podStartSLOduration=49.34347054 podStartE2EDuration="49.34347054s" podCreationTimestamp="2026-04-21 10:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:48:32.342611391 +0000 UTC m=+54.268909276" watchObservedRunningTime="2026-04-21 10:48:32.34347054 +0000 UTC m=+54.269768466" Apr 21 10:48:32.424830 systemd[1]: Started sshd@10-10.0.0.157:22-10.0.0.1:55534.service - OpenSSH per-connection server 
daemon (10.0.0.1:55534). Apr 21 10:48:32.467349 sshd[4616]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:32.468793 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:32.472723 systemd-logind[1466]: New session 11 of user core. Apr 21 10:48:32.485003 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:48:32.617439 systemd-networkd[1404]: cali06cecbb44f0: Gained IPv6LL Apr 21 10:48:32.664089 sshd[4616]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:32.667398 systemd[1]: sshd@10-10.0.0.157:22-10.0.0.1:55534.service: Deactivated successfully. Apr 21 10:48:32.669074 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:48:32.669660 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:48:32.670717 systemd-logind[1466]: Removed session 11. Apr 21 10:48:32.808224 systemd-networkd[1404]: calie4022b043e1: Gained IPv6LL Apr 21 10:48:33.128085 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Apr 21 10:48:33.128931 systemd-networkd[1404]: cali9a04f1ab1df: Gained IPv6LL Apr 21 10:48:33.318070 kubelet[2526]: E0421 10:48:33.318017 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:33.318704 kubelet[2526]: E0421 10:48:33.318111 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:33.384084 systemd-networkd[1404]: cali097d89f618c: Gained IPv6LL Apr 21 10:48:33.705210 systemd-networkd[1404]: calif5198ef5700: Gained IPv6LL Apr 21 10:48:33.896145 systemd-networkd[1404]: calia3a271c624f: Gained IPv6LL Apr 21 10:48:34.319798 kubelet[2526]: E0421 10:48:34.319740 2526 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:34.320212 kubelet[2526]: E0421 10:48:34.320098 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:34.463837 containerd[1480]: time="2026-04-21T10:48:34.463771510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:34.464525 containerd[1480]: time="2026-04-21T10:48:34.464473260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:48:34.465658 containerd[1480]: time="2026-04-21T10:48:34.465627884Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:34.467743 containerd[1480]: time="2026-04-21T10:48:34.467714949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:34.468312 containerd[1480]: time="2026-04-21T10:48:34.468279436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.356234926s" Apr 21 10:48:34.468359 containerd[1480]: time="2026-04-21T10:48:34.468311774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" 
returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:48:34.469248 containerd[1480]: time="2026-04-21T10:48:34.469224360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:48:34.481105 containerd[1480]: time="2026-04-21T10:48:34.481074253Z" level=info msg="CreateContainer within sandbox \"8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:48:34.493595 containerd[1480]: time="2026-04-21T10:48:34.493534183Z" level=info msg="CreateContainer within sandbox \"8bb7324e3367d30de1d6cd829b80235b9aee32ea162475e320404ae8eca5a103\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330\"" Apr 21 10:48:34.495080 containerd[1480]: time="2026-04-21T10:48:34.494193160Z" level=info msg="StartContainer for \"5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330\"" Apr 21 10:48:34.523033 systemd[1]: Started cri-containerd-5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330.scope - libcontainer container 5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330. 
Apr 21 10:48:34.554445 containerd[1480]: time="2026-04-21T10:48:34.554404277Z" level=info msg="StartContainer for \"5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330\" returns successfully" Apr 21 10:48:35.334320 kubelet[2526]: I0421 10:48:35.334219 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8d98f594c-fjmst" podStartSLOduration=37.976376419 podStartE2EDuration="41.334207002s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:48:31.111216063 +0000 UTC m=+53.037513947" lastFinishedPulling="2026-04-21 10:48:34.469046639 +0000 UTC m=+56.395344530" observedRunningTime="2026-04-21 10:48:35.33393723 +0000 UTC m=+57.260235122" watchObservedRunningTime="2026-04-21 10:48:35.334207002 +0000 UTC m=+57.260504897" Apr 21 10:48:37.679564 systemd[1]: Started sshd@11-10.0.0.157:22-10.0.0.1:39394.service - OpenSSH per-connection server daemon (10.0.0.1:39394). Apr 21 10:48:37.692243 containerd[1480]: time="2026-04-21T10:48:37.692084958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:37.694896 containerd[1480]: time="2026-04-21T10:48:37.692984201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:48:37.694896 containerd[1480]: time="2026-04-21T10:48:37.693951365Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:37.696726 containerd[1480]: time="2026-04-21T10:48:37.696677878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:37.698351 containerd[1480]: 
time="2026-04-21T10:48:37.698288520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.229031597s" Apr 21 10:48:37.698351 containerd[1480]: time="2026-04-21T10:48:37.698329963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:48:37.700789 containerd[1480]: time="2026-04-21T10:48:37.700751195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:48:37.704637 containerd[1480]: time="2026-04-21T10:48:37.704576793Z" level=info msg="CreateContainer within sandbox \"deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:48:37.718163 containerd[1480]: time="2026-04-21T10:48:37.718083465Z" level=info msg="CreateContainer within sandbox \"deeecf8fa4b0df5105f387704f13b772a1ce5714e17d15f68c08842415c4b454\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de770472cf5ae6509ed066ae0aade43785892effba58a4b7a4e0a09ff124b785\"" Apr 21 10:48:37.718881 containerd[1480]: time="2026-04-21T10:48:37.718690980Z" level=info msg="StartContainer for \"de770472cf5ae6509ed066ae0aade43785892effba58a4b7a4e0a09ff124b785\"" Apr 21 10:48:37.734626 sshd[4764]: Accepted publickey for core from 10.0.0.1 port 39394 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:37.735667 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:37.739958 systemd-logind[1466]: New session 12 of user core. 
Apr 21 10:48:37.744046 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:48:37.757020 systemd[1]: Started cri-containerd-de770472cf5ae6509ed066ae0aade43785892effba58a4b7a4e0a09ff124b785.scope - libcontainer container de770472cf5ae6509ed066ae0aade43785892effba58a4b7a4e0a09ff124b785. Apr 21 10:48:37.812939 containerd[1480]: time="2026-04-21T10:48:37.812820568Z" level=info msg="StartContainer for \"de770472cf5ae6509ed066ae0aade43785892effba58a4b7a4e0a09ff124b785\" returns successfully" Apr 21 10:48:37.979751 sshd[4764]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:37.983190 systemd[1]: sshd@11-10.0.0.157:22-10.0.0.1:39394.service: Deactivated successfully. Apr 21 10:48:37.985126 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:48:37.985734 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:48:37.986714 systemd-logind[1466]: Removed session 12. Apr 21 10:48:39.351571 kubelet[2526]: I0421 10:48:39.350491 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:48:39.758670 containerd[1480]: time="2026-04-21T10:48:39.758586314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:39.759504 containerd[1480]: time="2026-04-21T10:48:39.759443468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:48:39.767262 containerd[1480]: time="2026-04-21T10:48:39.767154216Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:39.771939 containerd[1480]: time="2026-04-21T10:48:39.771885736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:39.773136 containerd[1480]: time="2026-04-21T10:48:39.773056383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.07226953s" Apr 21 10:48:39.773136 containerd[1480]: time="2026-04-21T10:48:39.773104176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:48:39.776486 containerd[1480]: time="2026-04-21T10:48:39.776441029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:48:39.783571 containerd[1480]: time="2026-04-21T10:48:39.783500704Z" level=info msg="CreateContainer within sandbox \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:48:39.845961 containerd[1480]: time="2026-04-21T10:48:39.845573261Z" level=info msg="CreateContainer within sandbox \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\"" Apr 21 10:48:39.847940 containerd[1480]: time="2026-04-21T10:48:39.847833695Z" level=info msg="StartContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\"" Apr 21 10:48:39.913452 systemd[1]: Started cri-containerd-99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8.scope - libcontainer container 99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8. 
Apr 21 10:48:40.034219 containerd[1480]: time="2026-04-21T10:48:40.033760828Z" level=info msg="StartContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" returns successfully" Apr 21 10:48:41.756036 containerd[1480]: time="2026-04-21T10:48:41.755973264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:41.757340 containerd[1480]: time="2026-04-21T10:48:41.757243189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:48:41.758828 containerd[1480]: time="2026-04-21T10:48:41.758758918Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:41.763543 containerd[1480]: time="2026-04-21T10:48:41.763443500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:41.772182 containerd[1480]: time="2026-04-21T10:48:41.772086908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.995592396s" Apr 21 10:48:41.772182 containerd[1480]: time="2026-04-21T10:48:41.772166554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:48:41.773503 containerd[1480]: time="2026-04-21T10:48:41.773431335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 
10:48:41.779820 containerd[1480]: time="2026-04-21T10:48:41.778990648Z" level=info msg="CreateContainer within sandbox \"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:48:41.830440 containerd[1480]: time="2026-04-21T10:48:41.830363631Z" level=info msg="CreateContainer within sandbox \"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4952095521831c8060e1e62f7988e8c658be2695256dcdfb91e627b7f4f11e3\"" Apr 21 10:48:41.831963 containerd[1480]: time="2026-04-21T10:48:41.831611352Z" level=info msg="StartContainer for \"f4952095521831c8060e1e62f7988e8c658be2695256dcdfb91e627b7f4f11e3\"" Apr 21 10:48:41.928069 systemd[1]: Started cri-containerd-f4952095521831c8060e1e62f7988e8c658be2695256dcdfb91e627b7f4f11e3.scope - libcontainer container f4952095521831c8060e1e62f7988e8c658be2695256dcdfb91e627b7f4f11e3. Apr 21 10:48:41.956832 containerd[1480]: time="2026-04-21T10:48:41.956785550Z" level=info msg="StartContainer for \"f4952095521831c8060e1e62f7988e8c658be2695256dcdfb91e627b7f4f11e3\" returns successfully" Apr 21 10:48:42.223187 containerd[1480]: time="2026-04-21T10:48:42.223051281Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:42.227341 containerd[1480]: time="2026-04-21T10:48:42.227242772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:48:42.234031 containerd[1480]: time="2026-04-21T10:48:42.233941618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 460.39499ms" Apr 21 10:48:42.234031 containerd[1480]: time="2026-04-21T10:48:42.234021234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:48:42.236091 containerd[1480]: time="2026-04-21T10:48:42.236005405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:48:42.299343 containerd[1480]: time="2026-04-21T10:48:42.298370546Z" level=info msg="CreateContainer within sandbox \"bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:48:42.328934 containerd[1480]: time="2026-04-21T10:48:42.328590380Z" level=info msg="CreateContainer within sandbox \"bea10fcd415fc62c39d3c17b5c4e851db5f5e5926854915d0b766251f2a15c43\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"69d2768a7b126a11aedf6f972625e27b5327669a489318ac0442d3ba3c23c82c\"" Apr 21 10:48:42.333106 containerd[1480]: time="2026-04-21T10:48:42.332941927Z" level=info msg="StartContainer for \"69d2768a7b126a11aedf6f972625e27b5327669a489318ac0442d3ba3c23c82c\"" Apr 21 10:48:42.397441 systemd[1]: Started cri-containerd-69d2768a7b126a11aedf6f972625e27b5327669a489318ac0442d3ba3c23c82c.scope - libcontainer container 69d2768a7b126a11aedf6f972625e27b5327669a489318ac0442d3ba3c23c82c. Apr 21 10:48:42.469785 containerd[1480]: time="2026-04-21T10:48:42.469652838Z" level=info msg="StartContainer for \"69d2768a7b126a11aedf6f972625e27b5327669a489318ac0442d3ba3c23c82c\" returns successfully" Apr 21 10:48:43.003070 systemd[1]: Started sshd@12-10.0.0.157:22-10.0.0.1:39396.service - OpenSSH per-connection server daemon (10.0.0.1:39396). 
Apr 21 10:48:43.158928 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 39396 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:43.161997 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:43.170645 systemd-logind[1466]: New session 13 of user core. Apr 21 10:48:43.178512 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:48:43.469112 kubelet[2526]: I0421 10:48:43.469027 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c65dc69c4-h9lww" podStartSLOduration=42.932494467 podStartE2EDuration="49.46899572s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:48:31.162871755 +0000 UTC m=+53.089169639" lastFinishedPulling="2026-04-21 10:48:37.699373 +0000 UTC m=+59.625670892" observedRunningTime="2026-04-21 10:48:38.389080635 +0000 UTC m=+60.315378528" watchObservedRunningTime="2026-04-21 10:48:43.46899572 +0000 UTC m=+65.395293618" Apr 21 10:48:43.701603 sshd[4964]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:43.709418 systemd[1]: sshd@12-10.0.0.157:22-10.0.0.1:39396.service: Deactivated successfully. Apr 21 10:48:43.719720 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:48:43.723147 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:48:43.727732 systemd-logind[1466]: Removed session 13. Apr 21 10:48:43.791763 kernel: hrtimer: interrupt took 5255816 ns Apr 21 10:48:44.438185 kubelet[2526]: I0421 10:48:44.438102 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:48:44.899470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811176260.mount: Deactivated successfully. 
Apr 21 10:48:45.220894 containerd[1480]: time="2026-04-21T10:48:45.219135781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:45.221419 containerd[1480]: time="2026-04-21T10:48:45.221383367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:48:45.223053 containerd[1480]: time="2026-04-21T10:48:45.223022405Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:45.235018 containerd[1480]: time="2026-04-21T10:48:45.234944289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:45.237875 containerd[1480]: time="2026-04-21T10:48:45.236110223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.000061869s" Apr 21 10:48:45.237875 containerd[1480]: time="2026-04-21T10:48:45.236137628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:48:45.243390 containerd[1480]: time="2026-04-21T10:48:45.240807212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:48:45.246418 containerd[1480]: time="2026-04-21T10:48:45.246271262Z" level=info msg="CreateContainer within sandbox 
\"f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:48:45.271825 containerd[1480]: time="2026-04-21T10:48:45.271751952Z" level=info msg="CreateContainer within sandbox \"f980eac74f002f1a8d86aa51815d57c4039a35d55696705caf7b73fc17c60cc0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8534f639a2dffc39a40b4c0e0c1a62922394806e4dd767237bc0d28d1bd69da8\"" Apr 21 10:48:45.272599 containerd[1480]: time="2026-04-21T10:48:45.272406372Z" level=info msg="StartContainer for \"8534f639a2dffc39a40b4c0e0c1a62922394806e4dd767237bc0d28d1bd69da8\"" Apr 21 10:48:45.324507 systemd[1]: Started cri-containerd-8534f639a2dffc39a40b4c0e0c1a62922394806e4dd767237bc0d28d1bd69da8.scope - libcontainer container 8534f639a2dffc39a40b4c0e0c1a62922394806e4dd767237bc0d28d1bd69da8. Apr 21 10:48:45.371429 containerd[1480]: time="2026-04-21T10:48:45.371341765Z" level=info msg="StartContainer for \"8534f639a2dffc39a40b4c0e0c1a62922394806e4dd767237bc0d28d1bd69da8\" returns successfully" Apr 21 10:48:45.462726 kubelet[2526]: I0421 10:48:45.462268 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-hdntw" podStartSLOduration=38.170621795 podStartE2EDuration="51.462114811s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:48:31.949099706 +0000 UTC m=+53.875397591" lastFinishedPulling="2026-04-21 10:48:45.24059272 +0000 UTC m=+67.166890607" observedRunningTime="2026-04-21 10:48:45.461691983 +0000 UTC m=+67.387989870" watchObservedRunningTime="2026-04-21 10:48:45.462114811 +0000 UTC m=+67.388412717" Apr 21 10:48:45.462726 kubelet[2526]: I0421 10:48:45.462586 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-c65dc69c4-dhsll" podStartSLOduration=41.06290572 podStartE2EDuration="51.462574595s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" 
firstStartedPulling="2026-04-21 10:48:31.835818475 +0000 UTC m=+53.762116359" lastFinishedPulling="2026-04-21 10:48:42.235487344 +0000 UTC m=+64.161785234" observedRunningTime="2026-04-21 10:48:43.480785593 +0000 UTC m=+65.407083481" watchObservedRunningTime="2026-04-21 10:48:45.462574595 +0000 UTC m=+67.388872500" Apr 21 10:48:46.723438 kubelet[2526]: I0421 10:48:46.723356 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:48:48.714262 systemd[1]: Started sshd@13-10.0.0.157:22-10.0.0.1:48106.service - OpenSSH per-connection server daemon (10.0.0.1:48106). Apr 21 10:48:48.774512 sshd[5111]: Accepted publickey for core from 10.0.0.1 port 48106 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:48.776428 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:48.782316 systemd-logind[1466]: New session 14 of user core. Apr 21 10:48:48.788106 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:48:48.920930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617016839.mount: Deactivated successfully. 
Apr 21 10:48:48.981614 containerd[1480]: time="2026-04-21T10:48:48.981471703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:48.982766 containerd[1480]: time="2026-04-21T10:48:48.982726443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:48:48.983831 containerd[1480]: time="2026-04-21T10:48:48.983575282Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:48.996575 containerd[1480]: time="2026-04-21T10:48:48.996504239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:49.000449 containerd[1480]: time="2026-04-21T10:48:49.000338959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.756726901s" Apr 21 10:48:49.000542 containerd[1480]: time="2026-04-21T10:48:49.000473066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:48:49.002707 containerd[1480]: time="2026-04-21T10:48:49.002653736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:48:49.010919 containerd[1480]: time="2026-04-21T10:48:49.010883402Z" level=info msg="CreateContainer within sandbox 
\"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:48:49.029297 containerd[1480]: time="2026-04-21T10:48:49.029235929Z" level=info msg="CreateContainer within sandbox \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\"" Apr 21 10:48:49.030977 containerd[1480]: time="2026-04-21T10:48:49.030384283Z" level=info msg="StartContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\"" Apr 21 10:48:49.097062 sshd[5111]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:49.097078 systemd[1]: Started cri-containerd-c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202.scope - libcontainer container c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202. Apr 21 10:48:49.101372 systemd[1]: Started sshd@14-10.0.0.157:22-10.0.0.1:48108.service - OpenSSH per-connection server daemon (10.0.0.1:48108). Apr 21 10:48:49.107064 systemd[1]: sshd@13-10.0.0.157:22-10.0.0.1:48106.service: Deactivated successfully. Apr 21 10:48:49.109380 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:48:49.110721 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:48:49.111566 systemd-logind[1466]: Removed session 14. Apr 21 10:48:49.135757 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 48108 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:49.143066 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:49.150898 systemd-logind[1466]: New session 15 of user core. Apr 21 10:48:49.155200 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 21 10:48:49.166207 containerd[1480]: time="2026-04-21T10:48:49.166159194Z" level=info msg="StartContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" returns successfully" Apr 21 10:48:49.338222 sshd[5148]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:49.350320 systemd[1]: sshd@14-10.0.0.157:22-10.0.0.1:48108.service: Deactivated successfully. Apr 21 10:48:49.354922 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:48:49.357336 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:48:49.366670 systemd[1]: Started sshd@15-10.0.0.157:22-10.0.0.1:48118.service - OpenSSH per-connection server daemon (10.0.0.1:48118). Apr 21 10:48:49.369813 systemd-logind[1466]: Removed session 15. Apr 21 10:48:49.410810 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 48118 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:49.411316 sshd[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:49.415614 systemd-logind[1466]: New session 16 of user core. Apr 21 10:48:49.426144 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 21 10:48:49.501245 kubelet[2526]: I0421 10:48:49.499543 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7dc6cf9bc5-gw522" podStartSLOduration=19.779925925 podStartE2EDuration="37.483954762s" podCreationTimestamp="2026-04-21 10:48:12 +0000 UTC" firstStartedPulling="2026-04-21 10:48:31.298259243 +0000 UTC m=+53.224557127" lastFinishedPulling="2026-04-21 10:48:49.002288081 +0000 UTC m=+70.928585964" observedRunningTime="2026-04-21 10:48:49.48276305 +0000 UTC m=+71.409060938" watchObservedRunningTime="2026-04-21 10:48:49.483954762 +0000 UTC m=+71.410252654" Apr 21 10:48:49.596002 sshd[5181]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:49.599893 systemd[1]: sshd@15-10.0.0.157:22-10.0.0.1:48118.service: Deactivated successfully. Apr 21 10:48:49.602478 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:48:49.605327 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:48:49.607615 systemd-logind[1466]: Removed session 16. 
Apr 21 10:48:49.612196 containerd[1480]: time="2026-04-21T10:48:49.611917617Z" level=info msg="StopContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" with timeout 30 (s)" Apr 21 10:48:49.612196 containerd[1480]: time="2026-04-21T10:48:49.612002600Z" level=info msg="StopContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" with timeout 30 (s)" Apr 21 10:48:49.613018 containerd[1480]: time="2026-04-21T10:48:49.612933317Z" level=info msg="Stop container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" with signal terminated" Apr 21 10:48:49.613542 containerd[1480]: time="2026-04-21T10:48:49.613512633Z" level=info msg="Stop container \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" with signal terminated" Apr 21 10:48:49.625484 systemd[1]: cri-containerd-c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202.scope: Deactivated successfully. Apr 21 10:48:49.639417 systemd[1]: cri-containerd-99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8.scope: Deactivated successfully. Apr 21 10:48:49.685690 containerd[1480]: time="2026-04-21T10:48:49.671806604Z" level=info msg="shim disconnected" id=99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8 namespace=k8s.io Apr 21 10:48:49.687580 containerd[1480]: time="2026-04-21T10:48:49.685695783Z" level=warning msg="cleaning up after shim disconnected" id=99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8 namespace=k8s.io Apr 21 10:48:49.687580 containerd[1480]: time="2026-04-21T10:48:49.685733238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:49.703747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202-rootfs.mount: Deactivated successfully. 
Apr 21 10:48:49.704694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8-rootfs.mount: Deactivated successfully. Apr 21 10:48:49.715447 containerd[1480]: time="2026-04-21T10:48:49.715356573Z" level=info msg="shim disconnected" id=c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202 namespace=k8s.io Apr 21 10:48:49.715447 containerd[1480]: time="2026-04-21T10:48:49.715429321Z" level=warning msg="cleaning up after shim disconnected" id=c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202 namespace=k8s.io Apr 21 10:48:49.715447 containerd[1480]: time="2026-04-21T10:48:49.715438827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:49.718783 containerd[1480]: time="2026-04-21T10:48:49.718596061Z" level=info msg="StopContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" returns successfully" Apr 21 10:48:49.734953 containerd[1480]: time="2026-04-21T10:48:49.734888122Z" level=info msg="StopContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" returns successfully" Apr 21 10:48:49.738279 containerd[1480]: time="2026-04-21T10:48:49.738221664Z" level=info msg="StopPodSandbox for \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\"" Apr 21 10:48:49.738440 containerd[1480]: time="2026-04-21T10:48:49.738292878Z" level=info msg="Container to stop \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:48:49.738440 containerd[1480]: time="2026-04-21T10:48:49.738302910Z" level=info msg="Container to stop \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:48:49.741736 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9-shm.mount: Deactivated successfully. Apr 21 10:48:49.785486 systemd[1]: cri-containerd-8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9.scope: Deactivated successfully. Apr 21 10:48:49.805481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9-rootfs.mount: Deactivated successfully. Apr 21 10:48:49.805980 containerd[1480]: time="2026-04-21T10:48:49.805810238Z" level=info msg="shim disconnected" id=8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9 namespace=k8s.io Apr 21 10:48:49.805980 containerd[1480]: time="2026-04-21T10:48:49.805880116Z" level=warning msg="cleaning up after shim disconnected" id=8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9 namespace=k8s.io Apr 21 10:48:49.805980 containerd[1480]: time="2026-04-21T10:48:49.805886869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:48:49.872962 systemd-networkd[1404]: cali06cecbb44f0: Link DOWN Apr 21 10:48:49.872970 systemd-networkd[1404]: cali06cecbb44f0: Lost carrier Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.871 [INFO][5302] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.871 [INFO][5302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" iface="eth0" netns="/var/run/netns/cni-9d343fe4-b2ea-4bad-62e0-09df32af6070" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.872 [INFO][5302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" iface="eth0" netns="/var/run/netns/cni-9d343fe4-b2ea-4bad-62e0-09df32af6070" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.883 [INFO][5302] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" after=11.270349ms iface="eth0" netns="/var/run/netns/cni-9d343fe4-b2ea-4bad-62e0-09df32af6070" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.883 [INFO][5302] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.883 [INFO][5302] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.926 [INFO][5317] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.926 [INFO][5317] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.926 [INFO][5317] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.960 [INFO][5317] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.961 [INFO][5317] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" HandleID="k8s-pod-network.8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Workload="localhost-k8s-whisker--7dc6cf9bc5--gw522-eth0" Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.962 [INFO][5317] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:48:49.967328 containerd[1480]: 2026-04-21 10:48:49.964 [INFO][5302] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9" Apr 21 10:48:49.967730 containerd[1480]: time="2026-04-21T10:48:49.967678652Z" level=info msg="TearDown network for sandbox \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" successfully" Apr 21 10:48:49.967730 containerd[1480]: time="2026-04-21T10:48:49.967708432Z" level=info msg="StopPodSandbox for \"8d5bb3acec725ce0f96baf51cbee8f5c9991b895c30315666d1caa057f4b18e9\" returns successfully" Apr 21 10:48:49.969736 systemd[1]: run-netns-cni\x2d9d343fe4\x2db2ea\x2d4bad\x2d62e0\x2d09df32af6070.mount: Deactivated successfully. 
Apr 21 10:48:49.990822 kubelet[2526]: I0421 10:48:49.990765 2526 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-ca-bundle\") pod \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " Apr 21 10:48:49.991003 kubelet[2526]: I0421 10:48:49.990867 2526 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpgkz\" (UniqueName: \"kubernetes.io/projected/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-kube-api-access-dpgkz\") pod \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " Apr 21 10:48:49.991003 kubelet[2526]: I0421 10:48:49.990914 2526 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-nginx-config\") pod \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " Apr 21 10:48:49.991003 kubelet[2526]: I0421 10:48:49.990935 2526 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-backend-key-pair\") pod \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\" (UID: \"ccfa94ae-c2bb-42dc-bf66-d035a841d8b5\") " Apr 21 10:48:50.002473 kubelet[2526]: I0421 10:48:49.993795 2526 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5" (UID: "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:48:50.001729 systemd[1]: var-lib-kubelet-pods-ccfa94ae\x2dc2bb\x2d42dc\x2dbf66\x2dd035a841d8b5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:48:50.002980 kubelet[2526]: I0421 10:48:49.994137 2526 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5" (UID: "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:48:50.003634 kubelet[2526]: I0421 10:48:50.003606 2526 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-kube-api-access-dpgkz" (OuterVolumeSpecName: "kube-api-access-dpgkz") pod "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5" (UID: "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5"). InnerVolumeSpecName "kube-api-access-dpgkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:48:50.003842 kubelet[2526]: I0421 10:48:50.003814 2526 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5" (UID: "ccfa94ae-c2bb-42dc-bf66-d035a841d8b5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:48:50.091674 kubelet[2526]: I0421 10:48:50.091592 2526 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 10:48:50.091674 kubelet[2526]: I0421 10:48:50.091642 2526 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 10:48:50.091674 kubelet[2526]: I0421 10:48:50.091654 2526 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 10:48:50.091674 kubelet[2526]: I0421 10:48:50.091660 2526 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dpgkz\" (UniqueName: \"kubernetes.io/projected/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5-kube-api-access-dpgkz\") on node \"localhost\" DevicePath \"\"" Apr 21 10:48:50.154256 systemd[1]: Removed slice kubepods-besteffort-podccfa94ae_c2bb_42dc_bf66_d035a841d8b5.slice - libcontainer container kubepods-besteffort-podccfa94ae_c2bb_42dc_bf66_d035a841d8b5.slice. 
Apr 21 10:48:50.468440 kubelet[2526]: I0421 10:48:50.468303 2526 scope.go:117] "RemoveContainer" containerID="c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202" Apr 21 10:48:50.469582 containerd[1480]: time="2026-04-21T10:48:50.469550953Z" level=info msg="RemoveContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\"" Apr 21 10:48:50.482519 containerd[1480]: time="2026-04-21T10:48:50.482440284Z" level=info msg="RemoveContainer for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" returns successfully" Apr 21 10:48:50.483232 kubelet[2526]: I0421 10:48:50.482628 2526 scope.go:117] "RemoveContainer" containerID="99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8" Apr 21 10:48:50.485706 containerd[1480]: time="2026-04-21T10:48:50.485659085Z" level=info msg="RemoveContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\"" Apr 21 10:48:50.489526 containerd[1480]: time="2026-04-21T10:48:50.489465769Z" level=info msg="RemoveContainer for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" returns successfully" Apr 21 10:48:50.489822 kubelet[2526]: I0421 10:48:50.489784 2526 scope.go:117] "RemoveContainer" containerID="c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202" Apr 21 10:48:50.504659 containerd[1480]: time="2026-04-21T10:48:50.497668865Z" level=error msg="ContainerStatus for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": not found" Apr 21 10:48:50.504891 kubelet[2526]: E0421 10:48:50.504872 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": not found" 
containerID="c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202" Apr 21 10:48:50.511698 kubelet[2526]: I0421 10:48:50.505541 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202"} err="failed to get container status \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": not found" Apr 21 10:48:50.511698 kubelet[2526]: I0421 10:48:50.511592 2526 scope.go:117] "RemoveContainer" containerID="99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8" Apr 21 10:48:50.512136 containerd[1480]: time="2026-04-21T10:48:50.512100121Z" level=error msg="ContainerStatus for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": not found" Apr 21 10:48:50.512285 kubelet[2526]: E0421 10:48:50.512265 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": not found" containerID="99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8" Apr 21 10:48:50.512342 kubelet[2526]: I0421 10:48:50.512284 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8"} err="failed to get container status \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": not found" Apr 21 
10:48:50.512342 kubelet[2526]: I0421 10:48:50.512300 2526 scope.go:117] "RemoveContainer" containerID="c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202" Apr 21 10:48:50.513334 containerd[1480]: time="2026-04-21T10:48:50.513289362Z" level=error msg="ContainerStatus for \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": not found" Apr 21 10:48:50.513637 kubelet[2526]: I0421 10:48:50.513576 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202"} err="failed to get container status \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7bb9659c7e4e9585e99db2efb66bd36dcca3bd63666841404b5e89872372202\": not found" Apr 21 10:48:50.513637 kubelet[2526]: I0421 10:48:50.513594 2526 scope.go:117] "RemoveContainer" containerID="99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8" Apr 21 10:48:50.514000 containerd[1480]: time="2026-04-21T10:48:50.513806876Z" level=error msg="ContainerStatus for \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": not found" Apr 21 10:48:50.514048 kubelet[2526]: I0421 10:48:50.513961 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8"} err="failed to get container status \"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"99ba4693dcfd31c4bea1550acfff9dc8220e205d0b3c18f8c35d93beb44d6fb8\": not found" Apr 21 10:48:50.703488 systemd[1]: var-lib-kubelet-pods-ccfa94ae\x2dc2bb\x2d42dc\x2dbf66\x2dd035a841d8b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpgkz.mount: Deactivated successfully. Apr 21 10:48:50.890590 containerd[1480]: time="2026-04-21T10:48:50.890415528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:50.891456 containerd[1480]: time="2026-04-21T10:48:50.891407355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:48:50.893221 containerd[1480]: time="2026-04-21T10:48:50.893121579Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:50.900989 containerd[1480]: time="2026-04-21T10:48:50.900934360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:48:50.902320 containerd[1480]: time="2026-04-21T10:48:50.901787539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.899028689s" Apr 21 10:48:50.902320 containerd[1480]: time="2026-04-21T10:48:50.901868150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:48:50.908793 containerd[1480]: time="2026-04-21T10:48:50.908725611Z" level=info msg="CreateContainer within sandbox \"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:48:50.923036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088492780.mount: Deactivated successfully. Apr 21 10:48:50.928175 containerd[1480]: time="2026-04-21T10:48:50.928128183Z" level=info msg="CreateContainer within sandbox \"6188433d9eef7decfb6467a79dbfe8dad9463df6a806bc9fe017ac983cf3cb92\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6dbf72b01ed506d019ddf877e932611741417580eade3af828e71f04f8c9b7de\"" Apr 21 10:48:50.929487 containerd[1480]: time="2026-04-21T10:48:50.928878589Z" level=info msg="StartContainer for \"6dbf72b01ed506d019ddf877e932611741417580eade3af828e71f04f8c9b7de\"" Apr 21 10:48:50.957487 systemd[1]: Started cri-containerd-6dbf72b01ed506d019ddf877e932611741417580eade3af828e71f04f8c9b7de.scope - libcontainer container 6dbf72b01ed506d019ddf877e932611741417580eade3af828e71f04f8c9b7de. 
Apr 21 10:48:50.984597 containerd[1480]: time="2026-04-21T10:48:50.984551006Z" level=info msg="StartContainer for \"6dbf72b01ed506d019ddf877e932611741417580eade3af828e71f04f8c9b7de\" returns successfully" Apr 21 10:48:51.291980 kubelet[2526]: I0421 10:48:51.291924 2526 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:48:51.293580 kubelet[2526]: I0421 10:48:51.293529 2526 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:48:52.149880 kubelet[2526]: I0421 10:48:52.149775 2526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfa94ae-c2bb-42dc-bf66-d035a841d8b5" path="/var/lib/kubelet/pods/ccfa94ae-c2bb-42dc-bf66-d035a841d8b5/volumes" Apr 21 10:48:52.152063 kubelet[2526]: E0421 10:48:52.151752 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:54.611119 systemd[1]: Started sshd@16-10.0.0.157:22-10.0.0.1:48126.service - OpenSSH per-connection server daemon (10.0.0.1:48126). Apr 21 10:48:54.680888 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:54.681092 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:54.692051 systemd-logind[1466]: New session 17 of user core. Apr 21 10:48:54.695040 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:48:54.931220 sshd[5402]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:54.938424 systemd[1]: sshd@16-10.0.0.157:22-10.0.0.1:48126.service: Deactivated successfully. Apr 21 10:48:54.940484 systemd[1]: session-17.scope: Deactivated successfully. 
Apr 21 10:48:54.942316 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:48:54.949595 systemd[1]: Started sshd@17-10.0.0.157:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134). Apr 21 10:48:54.950801 systemd-logind[1466]: Removed session 17. Apr 21 10:48:54.982120 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:54.983697 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:54.984677 kubelet[2526]: I0421 10:48:54.984346 2526 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:48:54.988945 systemd-logind[1466]: New session 18 of user core. Apr 21 10:48:54.995118 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:48:55.004583 kubelet[2526]: I0421 10:48:55.004496 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mzjww" podStartSLOduration=41.527844022 podStartE2EDuration="1m1.004471681s" podCreationTimestamp="2026-04-21 10:47:54 +0000 UTC" firstStartedPulling="2026-04-21 10:48:31.425921568 +0000 UTC m=+53.352219452" lastFinishedPulling="2026-04-21 10:48:50.902549227 +0000 UTC m=+72.828847111" observedRunningTime="2026-04-21 10:48:51.482107256 +0000 UTC m=+73.408405148" watchObservedRunningTime="2026-04-21 10:48:55.004471681 +0000 UTC m=+76.930769579" Apr 21 10:48:55.150884 kubelet[2526]: E0421 10:48:55.150778 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 10:48:55.225354 sshd[5425]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:55.232435 systemd[1]: sshd@17-10.0.0.157:22-10.0.0.1:48134.service: Deactivated successfully. Apr 21 10:48:55.233918 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 21 10:48:55.234980 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:48:55.256243 systemd[1]: Started sshd@18-10.0.0.157:22-10.0.0.1:39070.service - OpenSSH per-connection server daemon (10.0.0.1:39070). Apr 21 10:48:55.259437 systemd-logind[1466]: Removed session 18. Apr 21 10:48:55.334041 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 39070 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:55.335547 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:55.339956 systemd-logind[1466]: New session 19 of user core. Apr 21 10:48:55.347052 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:48:55.784154 sshd[5440]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:55.796541 systemd[1]: sshd@18-10.0.0.157:22-10.0.0.1:39070.service: Deactivated successfully. Apr 21 10:48:55.800405 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:48:55.803221 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:48:55.810575 systemd[1]: Started sshd@19-10.0.0.157:22-10.0.0.1:39084.service - OpenSSH per-connection server daemon (10.0.0.1:39084). Apr 21 10:48:55.811664 systemd-logind[1466]: Removed session 19. Apr 21 10:48:55.856217 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 39084 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo Apr 21 10:48:55.857507 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:48:55.862258 systemd-logind[1466]: New session 20 of user core. Apr 21 10:48:55.866009 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:48:56.225929 sshd[5466]: pam_unix(sshd:session): session closed for user core Apr 21 10:48:56.233379 systemd[1]: sshd@19-10.0.0.157:22-10.0.0.1:39084.service: Deactivated successfully. 
Apr 21 10:48:56.236302 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:48:56.237398 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:48:56.252587 systemd[1]: Started sshd@20-10.0.0.157:22-10.0.0.1:39094.service - OpenSSH per-connection server daemon (10.0.0.1:39094).
Apr 21 10:48:56.254510 systemd-logind[1466]: Removed session 20.
Apr 21 10:48:56.311125 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 39094 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:48:56.312714 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:48:56.317229 systemd-logind[1466]: New session 21 of user core.
Apr 21 10:48:56.326262 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:48:56.451070 sshd[5478]: pam_unix(sshd:session): session closed for user core
Apr 21 10:48:56.454050 systemd[1]: sshd@20-10.0.0.157:22-10.0.0.1:39094.service: Deactivated successfully.
Apr 21 10:48:56.455531 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:48:56.456097 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:48:56.457439 systemd-logind[1466]: Removed session 21.
Apr 21 10:49:00.151335 kubelet[2526]: E0421 10:49:00.151007 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:49:01.147490 kubelet[2526]: E0421 10:49:01.147363 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 10:49:01.462446 systemd[1]: Started sshd@21-10.0.0.157:22-10.0.0.1:39108.service - OpenSSH per-connection server daemon (10.0.0.1:39108).
Apr 21 10:49:01.495707 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 39108 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:49:01.497829 sshd[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:49:01.503291 systemd-logind[1466]: New session 22 of user core.
Apr 21 10:49:01.515311 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:49:01.642900 sshd[5524]: pam_unix(sshd:session): session closed for user core
Apr 21 10:49:01.646210 systemd[1]: sshd@21-10.0.0.157:22-10.0.0.1:39108.service: Deactivated successfully.
Apr 21 10:49:01.647773 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:49:01.648675 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:49:01.650212 systemd-logind[1466]: Removed session 22.
Apr 21 10:49:05.337896 systemd[1]: run-containerd-runc-k8s.io-5129145fd85ad62e22c857c921c228d3ecaefa487344ca79afd101a0c7746330-runc.yY4Aya.mount: Deactivated successfully.
Apr 21 10:49:06.657372 systemd[1]: Started sshd@22-10.0.0.157:22-10.0.0.1:54128.service - OpenSSH per-connection server daemon (10.0.0.1:54128).
Apr 21 10:49:06.689021 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 54128 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:49:06.690146 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:49:06.693365 systemd-logind[1466]: New session 23 of user core.
Apr 21 10:49:06.704138 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:49:06.820823 sshd[5562]: pam_unix(sshd:session): session closed for user core
Apr 21 10:49:06.823735 systemd[1]: sshd@22-10.0.0.157:22-10.0.0.1:54128.service: Deactivated successfully.
Apr 21 10:49:06.825041 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:49:06.825692 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:49:06.826532 systemd-logind[1466]: Removed session 23.
Apr 21 10:49:11.832575 systemd[1]: Started sshd@23-10.0.0.157:22-10.0.0.1:54140.service - OpenSSH per-connection server daemon (10.0.0.1:54140).
Apr 21 10:49:11.881191 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 54140 ssh2: RSA SHA256:bdQJDpBlmAt2heQ9++MJaeBrbb+iB/VWHm7V5OyoMfo
Apr 21 10:49:11.882379 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:49:11.885929 systemd-logind[1466]: New session 24 of user core.
Apr 21 10:49:11.893020 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:49:12.018077 sshd[5583]: pam_unix(sshd:session): session closed for user core
Apr 21 10:49:12.021260 systemd[1]: sshd@23-10.0.0.157:22-10.0.0.1:54140.service: Deactivated successfully.
Apr 21 10:49:12.022553 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 10:49:12.023084 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit.
Apr 21 10:49:12.023900 systemd-logind[1466]: Removed session 24.