Apr 17 02:39:58.849081 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 17 02:39:58.849121 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 02:39:58.849129 kernel: BIOS-provided physical RAM map:
Apr 17 02:39:58.849135 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 02:39:58.849139 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 02:39:58.849144 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 02:39:58.849149 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 02:39:58.849153 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 02:39:58.849158 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 17 02:39:58.849162 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 17 02:39:58.849166 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 17 02:39:58.849171 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 17 02:39:58.849176 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 17 02:39:58.849181 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 17 02:39:58.849186 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 17 02:39:58.849191 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 02:39:58.849196 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 17 02:39:58.849202 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 17 02:39:58.849206 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 17 02:39:58.849211 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 17 02:39:58.849215 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 17 02:39:58.849220 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 02:39:58.849224 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 17 02:39:58.849229 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 02:39:58.849234 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 17 02:39:58.849238 kernel: NX (Execute Disable) protection: active
Apr 17 02:39:58.849260 kernel: APIC: Static calls initialized
Apr 17 02:39:58.849265 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 17 02:39:58.849272 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 17 02:39:58.849276 kernel: extended physical RAM map:
Apr 17 02:39:58.849281 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 02:39:58.849286 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 17 02:39:58.849291 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 17 02:39:58.849295 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 17 02:39:58.849300 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 17 02:39:58.849305 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 17 02:39:58.849309 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 17 02:39:58.849314 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 17 02:39:58.849319 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 17 02:39:58.849325 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 17 02:39:58.849332 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 17 02:39:58.849337 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 17 02:39:58.849342 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 17 02:39:58.849347 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 17 02:39:58.849353 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 17 02:39:58.849358 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 17 02:39:58.849363 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 17 02:39:58.849368 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 17 02:39:58.849373 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 17 02:39:58.849377 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 17 02:39:58.849382 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 17 02:39:58.849387 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 17 02:39:58.849392 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 17 02:39:58.849397 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 17 02:39:58.849402 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 02:39:58.849408 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 17 02:39:58.849413 kernel: efi: EFI v2.7 by EDK II
Apr 17 02:39:58.849418 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 17 02:39:58.849423 kernel: random: crng init done
Apr 17 02:39:58.849428 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 17 02:39:58.849433 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 17 02:39:58.849438 kernel: secureboot: Secure boot disabled
Apr 17 02:39:58.849443 kernel: SMBIOS 2.8 present.
Apr 17 02:39:58.849448 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 17 02:39:58.849453 kernel: DMI: Memory slots populated: 1/1
Apr 17 02:39:58.849458 kernel: Hypervisor detected: KVM
Apr 17 02:39:58.849463 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 17 02:39:58.849469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 02:39:58.849474 kernel: kvm-clock: using sched offset of 6036507745 cycles
Apr 17 02:39:58.849480 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 02:39:58.849485 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 02:39:58.849490 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 02:39:58.849496 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 02:39:58.849501 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 17 02:39:58.849506 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 02:39:58.849511 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 02:39:58.849517 kernel: Using GB pages for direct mapping
Apr 17 02:39:58.849522 kernel: ACPI: Early table checksum verification disabled
Apr 17 02:39:58.849527 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 17 02:39:58.849533 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 17 02:39:58.849538 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849543 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849548 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 17 02:39:58.849553 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849558 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849564 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849569 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 02:39:58.849574 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 02:39:58.849579 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 17 02:39:58.849584 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 17 02:39:58.849589 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 17 02:39:58.849595 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 17 02:39:58.849600 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 17 02:39:58.849605 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 17 02:39:58.849611 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 17 02:39:58.849616 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 17 02:39:58.849621 kernel: No NUMA configuration found
Apr 17 02:39:58.849626 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 17 02:39:58.849642 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 17 02:39:58.849647 kernel: Zone ranges:
Apr 17 02:39:58.849652 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 02:39:58.849658 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 17 02:39:58.849663 kernel: Normal empty
Apr 17 02:39:58.849715 kernel: Device empty
Apr 17 02:39:58.849720 kernel: Movable zone start for each node
Apr 17 02:39:58.849725 kernel: Early memory node ranges
Apr 17 02:39:58.849731 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 02:39:58.849736 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 17 02:39:58.849741 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 17 02:39:58.849746 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 17 02:39:58.849751 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 17 02:39:58.849756 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 17 02:39:58.849761 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 17 02:39:58.849768 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 17 02:39:58.849773 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 17 02:39:58.849778 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 02:39:58.849783 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 02:39:58.849788 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 17 02:39:58.849798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 02:39:58.849805 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 17 02:39:58.849811 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 17 02:39:58.849817 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 17 02:39:58.849822 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 17 02:39:58.849828 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 17 02:39:58.849833 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 02:39:58.849840 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 02:39:58.849846 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 02:39:58.849852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 02:39:58.849857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 02:39:58.849863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 02:39:58.849870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 02:39:58.849876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 02:39:58.849881 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 02:39:58.849887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 02:39:58.849892 kernel: TSC deadline timer available
Apr 17 02:39:58.849898 kernel: CPU topo: Max. logical packages: 1
Apr 17 02:39:58.849904 kernel: CPU topo: Max. logical dies: 1
Apr 17 02:39:58.849909 kernel: CPU topo: Max. dies per package: 1
Apr 17 02:39:58.849915 kernel: CPU topo: Max. threads per core: 1
Apr 17 02:39:58.849922 kernel: CPU topo: Num. cores per package: 4
Apr 17 02:39:58.849927 kernel: CPU topo: Num. threads per package: 4
Apr 17 02:39:58.849933 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 17 02:39:58.849939 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 02:39:58.849944 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 02:39:58.849950 kernel: kvm-guest: setup PV sched yield
Apr 17 02:39:58.849956 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 17 02:39:58.849961 kernel: Booting paravirtualized kernel on KVM
Apr 17 02:39:58.849967 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 02:39:58.849974 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 02:39:58.849980 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 17 02:39:58.849985 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 17 02:39:58.849991 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 02:39:58.849996 kernel: kvm-guest: PV spinlocks enabled
Apr 17 02:39:58.850002 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 02:39:58.850008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 02:39:58.850014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 02:39:58.850021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 02:39:58.850027 kernel: Fallback order for Node 0: 0
Apr 17 02:39:58.850032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 17 02:39:58.850038 kernel: Policy zone: DMA32
Apr 17 02:39:58.850044 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 02:39:58.850049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 02:39:58.850055 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 17 02:39:58.850061 kernel: ftrace: allocated 157 pages with 5 groups
Apr 17 02:39:58.850066 kernel: Dynamic Preempt: voluntary
Apr 17 02:39:58.850073 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 02:39:58.850079 kernel: rcu: RCU event tracing is enabled.
Apr 17 02:39:58.850085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 02:39:58.850090 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 02:39:58.850096 kernel: Rude variant of Tasks RCU enabled.
Apr 17 02:39:58.850102 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 02:39:58.850108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 02:39:58.850113 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 02:39:58.850119 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 02:39:58.850124 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 02:39:58.850131 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 02:39:58.850137 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 02:39:58.850143 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 02:39:58.850148 kernel: Console: colour dummy device 80x25
Apr 17 02:39:58.850154 kernel: printk: legacy console [ttyS0] enabled
Apr 17 02:39:58.850159 kernel: ACPI: Core revision 20240827
Apr 17 02:39:58.850165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 02:39:58.850171 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 02:39:58.850177 kernel: x2apic enabled
Apr 17 02:39:58.850183 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 02:39:58.850189 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 02:39:58.850195 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 02:39:58.850200 kernel: kvm-guest: setup PV IPIs
Apr 17 02:39:58.850206 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 02:39:58.850212 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 02:39:58.850217 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 02:39:58.850223 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 02:39:58.850229 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 02:39:58.850235 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 02:39:58.850257 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 02:39:58.850263 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 02:39:58.850269 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 02:39:58.850275 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 02:39:58.850281 kernel: RETBleed: Vulnerable
Apr 17 02:39:58.850286 kernel: Speculative Store Bypass: Vulnerable
Apr 17 02:39:58.850292 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 02:39:58.850299 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 02:39:58.850305 kernel: active return thunk: its_return_thunk
Apr 17 02:39:58.850311 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 02:39:58.850316 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 02:39:58.850322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 02:39:58.850328 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 02:39:58.850333 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 02:39:58.850339 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 02:39:58.850345 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 02:39:58.850352 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 02:39:58.850357 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 02:39:58.850363 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 02:39:58.850369 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 02:39:58.850374 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 02:39:58.850380 kernel: Freeing SMP alternatives memory: 32K
Apr 17 02:39:58.850386 kernel: pid_max: default: 32768 minimum: 301
Apr 17 02:39:58.850391 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 02:39:58.850397 kernel: landlock: Up and running.
Apr 17 02:39:58.850414 kernel: SELinux: Initializing.
Apr 17 02:39:58.850420 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 02:39:58.850426 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 02:39:58.850432 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 02:39:58.850437 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 02:39:58.850452 kernel: signal: max sigframe size: 3632
Apr 17 02:39:58.850467 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 02:39:58.850473 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 02:39:58.850488 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 02:39:58.850495 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 02:39:58.850510 kernel: smp: Bringing up secondary CPUs ...
Apr 17 02:39:58.850516 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 02:39:58.850530 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 02:39:58.850536 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 02:39:58.850542 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 02:39:58.850548 kernel: Memory: 2374696K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 185216K reserved, 0K cma-reserved)
Apr 17 02:39:58.850554 kernel: devtmpfs: initialized
Apr 17 02:39:58.850560 kernel: x86/mm: Memory block size: 128MB
Apr 17 02:39:58.850568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 17 02:39:58.850573 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 17 02:39:58.850579 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 17 02:39:58.850585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 17 02:39:58.850591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 17 02:39:58.850597 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 17 02:39:58.850602 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 02:39:58.850608 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 02:39:58.850614 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 02:39:58.850621 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 02:39:58.850626 kernel: audit: initializing netlink subsys (disabled)
Apr 17 02:39:58.850632 kernel: audit: type=2000 audit(1776393595.455:1): state=initialized audit_enabled=0 res=1
Apr 17 02:39:58.850638 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 02:39:58.850643 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 02:39:58.850649 kernel: cpuidle: using governor menu
Apr 17 02:39:58.850655 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 17 02:39:58.850660 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 02:39:58.850666 kernel: dca service started, version 1.12.1
Apr 17 02:39:58.850701 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 17 02:39:58.850707 kernel: PCI: Using configuration type 1 for base access
Apr 17 02:39:58.850713 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 02:39:58.850718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 02:39:58.850724 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 02:39:58.850730 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 02:39:58.850735 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 02:39:58.850741 kernel: ACPI: Added _OSI(Module Device)
Apr 17 02:39:58.850747 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 02:39:58.850753 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 02:39:58.850759 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 02:39:58.850765 kernel: ACPI: Interpreter enabled
Apr 17 02:39:58.850770 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 02:39:58.850776 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 02:39:58.850782 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 02:39:58.850787 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 02:39:58.850793 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 02:39:58.850799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 02:39:58.850901 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 02:39:58.850959 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 02:39:58.851012 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 02:39:58.851020 kernel: PCI host bridge to bus 0000:00
Apr 17 02:39:58.851078 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 02:39:58.851130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 02:39:58.851180 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 02:39:58.851228 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 17 02:39:58.851300 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 17 02:39:58.851347 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 17 02:39:58.851394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 02:39:58.851464 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 17 02:39:58.851526 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 17 02:39:58.851581 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 17 02:39:58.851633 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 17 02:39:58.851719 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 17 02:39:58.851773 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 02:39:58.851831 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 17 02:39:58.851913 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 17 02:39:58.851996 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 17 02:39:58.852179 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 17 02:39:58.852339 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 17 02:39:58.852399 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 17 02:39:58.852454 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 17 02:39:58.852507 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 17 02:39:58.852565 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 17 02:39:58.852623 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 17 02:39:58.852708 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 17 02:39:58.852764 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 17 02:39:58.852817 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 17 02:39:58.852872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 17 02:39:58.852924 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 02:39:58.852979 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 17 02:39:58.853034 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 17 02:39:58.853086 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 17 02:39:58.853142 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 17 02:39:58.853194 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 17 02:39:58.853201 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 02:39:58.853207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 02:39:58.853213 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 02:39:58.853220 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 02:39:58.853226 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 02:39:58.853232 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 02:39:58.853237 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 02:39:58.853274 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 02:39:58.853291 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 02:39:58.853297 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 02:39:58.853314 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 02:39:58.853320 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 02:39:58.853327 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 02:39:58.853333 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 02:39:58.853339 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 02:39:58.853344 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 02:39:58.853350 kernel: iommu: Default domain type: Translated
Apr 17 02:39:58.853356 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 02:39:58.853361 kernel: efivars: Registered efivars operations
Apr 17 02:39:58.853377 kernel: PCI: Using ACPI for IRQ routing
Apr 17 02:39:58.853383 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 02:39:58.853391 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 17 02:39:58.853396 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 17 02:39:58.853402 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 17 02:39:58.853407 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 17 02:39:58.853412 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 17 02:39:58.853418 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 17 02:39:58.853423 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 17 02:39:58.853429 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 17 02:39:58.853485 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 02:39:58.853540 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 02:39:58.853593 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 02:39:58.853600 kernel: vgaarb: loaded
Apr 17 02:39:58.853606 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 02:39:58.853612 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 02:39:58.853617 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 02:39:58.853623 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 02:39:58.853628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 02:39:58.853636 kernel: pnp: PnP ACPI init
Apr 17 02:39:58.853757 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 17 02:39:58.853766 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 02:39:58.853773 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 02:39:58.853788 kernel: NET: Registered PF_INET protocol family
Apr 17 02:39:58.853796 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 02:39:58.853802 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 02:39:58.853808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 02:39:58.853815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 02:39:58.853821 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 02:39:58.853827 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 02:39:58.853833 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 02:39:58.853839 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 02:39:58.853844 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 02:39:58.853850 kernel: NET: Registered PF_XDP protocol family
Apr 17 02:39:58.853903 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 17 02:39:58.853957 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 17 02:39:58.854008 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 02:39:58.854055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 02:39:58.854101 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 02:39:58.854147 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 17 02:39:58.854193 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 17 02:39:58.854353 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 17 02:39:58.854362 kernel: PCI: CLS 0 bytes, default 64
Apr 17 02:39:58.854368 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 02:39:58.854377 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 02:39:58.854383 kernel: Initialise system trusted keyrings
Apr 17 02:39:58.854391 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 02:39:58.854397 kernel: Key type asymmetric registered
Apr 17 02:39:58.854402 kernel: Asymmetric key parser 'x509' registered
Apr 17 02:39:58.854409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 02:39:58.854415 kernel: io scheduler mq-deadline registered
Apr 17 02:39:58.854421 kernel: io scheduler kyber registered
Apr 17 02:39:58.854427 kernel: io scheduler bfq registered
Apr 17 02:39:58.854433 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 02:39:58.854439 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 02:39:58.854445 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 02:39:58.854452 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 02:39:58.854458 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 02:39:58.854465 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 02:39:58.854471 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 02:39:58.854477 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 02:39:58.854483 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 02:39:58.854539 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 02:39:58.854547 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 02:39:58.854597 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 02:39:58.854646 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T02:39:58 UTC (1776393598)
Apr 17 02:39:58.854728 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 17 02:39:58.854736 kernel: intel_pstate: CPU model not supported
Apr 17 02:39:58.854742 kernel: efifb: probing for efifb
Apr 17 02:39:58.854748 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 17 02:39:58.854755 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 17 02:39:58.854760 kernel: efifb: scrolling: redraw
Apr 17 02:39:58.854766 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 02:39:58.854772 kernel: Console: switching to colour frame buffer device 160x50
Apr 17 02:39:58.854778 kernel: fb0: EFI VGA frame buffer device
Apr 17 02:39:58.854786 kernel: pstore: Using crash dump compression: deflate
Apr 17 02:39:58.854792 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 02:39:58.854798 kernel: NET: Registered PF_INET6 protocol family
Apr 17 02:39:58.854804 kernel: Segment Routing with IPv6
Apr 17 02:39:58.854810 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 02:39:58.854816 kernel: NET: Registered PF_PACKET protocol family
Apr 17 02:39:58.854822 kernel: Key type dns_resolver registered
Apr 17 02:39:58.854827 kernel: IPI shorthand broadcast: enabled
Apr 17 02:39:58.854833 kernel: sched_clock: Marking stable (2951013091, 772217718)->(3940598485, -217367676)
Apr 17 02:39:58.854841 kernel: registered taskstats version 1
Apr 17 02:39:58.854847 kernel: Loading compiled-in X.509 certificates
Apr 17 02:39:58.854852 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83' Apr
17 02:39:58.854858 kernel: Demotion targets for Node 0: null Apr 17 02:39:58.854864 kernel: Key type .fscrypt registered Apr 17 02:39:58.854870 kernel: Key type fscrypt-provisioning registered Apr 17 02:39:58.854876 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 02:39:58.854881 kernel: ima: Allocated hash algorithm: sha1 Apr 17 02:39:58.854887 kernel: ima: No architecture policies found Apr 17 02:39:58.854893 kernel: clk: Disabling unused clocks Apr 17 02:39:58.854900 kernel: Warning: unable to open an initial console. Apr 17 02:39:58.854906 kernel: Freeing unused kernel image (initmem) memory: 46216K Apr 17 02:39:58.854912 kernel: Write protecting the kernel read-only data: 40960k Apr 17 02:39:58.854918 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 17 02:39:58.854924 kernel: Run /init as init process Apr 17 02:39:58.854930 kernel: with arguments: Apr 17 02:39:58.854936 kernel: /init Apr 17 02:39:58.854942 kernel: with environment: Apr 17 02:39:58.854947 kernel: HOME=/ Apr 17 02:39:58.854954 kernel: TERM=linux Apr 17 02:39:58.854961 systemd[1]: Successfully made /usr/ read-only. Apr 17 02:39:58.854969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 17 02:39:58.854976 systemd[1]: Detected virtualization kvm. Apr 17 02:39:58.854982 systemd[1]: Detected architecture x86-64. Apr 17 02:39:58.854988 systemd[1]: Running in initrd. Apr 17 02:39:58.854994 systemd[1]: No hostname configured, using default hostname. Apr 17 02:39:58.855001 systemd[1]: Hostname set to . Apr 17 02:39:58.855007 systemd[1]: Initializing machine ID from VM UUID. Apr 17 02:39:58.855013 systemd[1]: Queued start job for default target initrd.target. 
Apr 17 02:39:58.855020 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 02:39:58.855026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 02:39:58.855034 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 02:39:58.855040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 02:39:58.855047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 02:39:58.855054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 02:39:58.855061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 02:39:58.855067 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 02:39:58.855074 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 02:39:58.855080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 02:39:58.855086 systemd[1]: Reached target paths.target - Path Units. Apr 17 02:39:58.855092 systemd[1]: Reached target slices.target - Slice Units. Apr 17 02:39:58.855099 systemd[1]: Reached target swap.target - Swaps. Apr 17 02:39:58.855105 systemd[1]: Reached target timers.target - Timer Units. Apr 17 02:39:58.855111 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 02:39:58.855117 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 02:39:58.855124 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 02:39:58.855130 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Apr 17 02:39:58.855136 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 02:39:58.855142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 02:39:58.855148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 02:39:58.855156 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 02:39:58.855162 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 02:39:58.855168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 02:39:58.855174 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 02:39:58.855180 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 17 02:39:58.855187 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 02:39:58.855193 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 02:39:58.855199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 02:39:58.855206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 02:39:58.855212 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 02:39:58.855219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 02:39:58.855238 systemd-journald[203]: Collecting audit messages is disabled. Apr 17 02:39:58.855277 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 02:39:58.855284 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 02:39:58.855291 systemd-journald[203]: Journal started Apr 17 02:39:58.855307 systemd-journald[203]: Runtime Journal (/run/log/journal/4ca1a49df79941cbb67fa1a01a1762f7) is 6M, max 48.1M, 42.1M free. 
Apr 17 02:39:58.852343 systemd-modules-load[204]: Inserted module 'overlay' Apr 17 02:39:58.859639 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 02:39:58.859939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 02:39:58.877714 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 02:39:58.879943 kernel: Bridge firewalling registered Apr 17 02:39:58.879759 systemd-modules-load[204]: Inserted module 'br_netfilter' Apr 17 02:39:58.879852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 02:39:58.882171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 02:39:58.886847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 02:39:58.891776 systemd-tmpfiles[212]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 17 02:39:58.892340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 02:39:58.892584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 02:39:58.902473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 02:39:58.904858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 02:39:58.916107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 02:39:58.918605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 02:39:58.925307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 02:39:58.936217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 17 02:39:58.938284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 02:39:58.946744 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 02:39:58.973044 systemd-resolved[243]: Positive Trust Anchors: Apr 17 02:39:58.973072 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 02:39:58.973097 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 02:39:58.974982 systemd-resolved[243]: Defaulting to hostname 'linux'. Apr 17 02:39:58.975701 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 02:39:58.976922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 02:39:59.055734 kernel: SCSI subsystem initialized Apr 17 02:39:59.064764 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 02:39:59.075755 kernel: iscsi: registered transport (tcp) Apr 17 02:39:59.095142 kernel: iscsi: registered transport (qla4xxx) Apr 17 02:39:59.095203 kernel: QLogic iSCSI HBA Driver Apr 17 02:39:59.113460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 02:39:59.132476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 02:39:59.138643 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 02:39:59.179638 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 02:39:59.184419 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 02:39:59.238731 kernel: raid6: avx512x4 gen() 43316 MB/s Apr 17 02:39:59.255741 kernel: raid6: avx512x2 gen() 42479 MB/s Apr 17 02:39:59.272754 kernel: raid6: avx512x1 gen() 41749 MB/s Apr 17 02:39:59.289776 kernel: raid6: avx2x4 gen() 29071 MB/s Apr 17 02:39:59.307780 kernel: raid6: avx2x2 gen() 35686 MB/s Apr 17 02:39:59.326238 kernel: raid6: avx2x1 gen() 25728 MB/s Apr 17 02:39:59.326334 kernel: raid6: using algorithm avx512x4 gen() 43316 MB/s Apr 17 02:39:59.345172 kernel: raid6: .... xor() 10303 MB/s, rmw enabled Apr 17 02:39:59.345270 kernel: raid6: using avx512x2 recovery algorithm Apr 17 02:39:59.365779 kernel: xor: automatically using best checksumming function avx Apr 17 02:39:59.504741 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 02:39:59.511108 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 02:39:59.512453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 02:39:59.542578 systemd-udevd[454]: Using default interface naming scheme 'v255'. Apr 17 02:39:59.546543 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 02:39:59.551720 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 17 02:39:59.579946 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Apr 17 02:39:59.604995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 02:39:59.606509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 02:39:59.657099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 02:39:59.662366 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 02:39:59.692842 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 17 02:39:59.697810 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 02:39:59.703220 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 17 02:39:59.711728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 02:39:59.711781 kernel: GPT:9289727 != 19775487 Apr 17 02:39:59.711791 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 02:39:59.711798 kernel: GPT:9289727 != 19775487 Apr 17 02:39:59.711805 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 02:39:59.711812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 02:39:59.711475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 02:39:59.711556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 02:39:59.717757 kernel: libata version 3.00 loaded. Apr 17 02:39:59.720130 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 02:39:59.725221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 02:39:59.737730 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 17 02:39:59.742726 kernel: AES CTR mode by8 optimization enabled Apr 17 02:39:59.743082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 17 02:39:59.765517 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 02:39:59.765724 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 02:39:59.765741 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 17 02:39:59.765825 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 17 02:39:59.765892 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 02:39:59.765957 kernel: scsi host0: ahci Apr 17 02:39:59.766026 kernel: scsi host1: ahci Apr 17 02:39:59.766091 kernel: scsi host2: ahci Apr 17 02:39:59.766152 kernel: scsi host3: ahci Apr 17 02:39:59.743170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 02:39:59.774208 kernel: scsi host4: ahci Apr 17 02:39:59.774361 kernel: scsi host5: ahci Apr 17 02:39:59.774433 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 17 02:39:59.759879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 02:39:59.788545 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 17 02:39:59.788565 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 17 02:39:59.788572 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 17 02:39:59.788580 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 17 02:39:59.788587 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 17 02:39:59.781526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 17 02:39:59.802193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 02:39:59.811406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 02:39:59.817479 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 17 02:39:59.822411 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 17 02:39:59.822533 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 17 02:39:59.830940 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 02:39:59.853488 disk-uuid[637]: Primary Header is updated. Apr 17 02:39:59.853488 disk-uuid[637]: Secondary Entries is updated. Apr 17 02:39:59.853488 disk-uuid[637]: Secondary Header is updated. Apr 17 02:39:59.859232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 02:39:59.861724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 02:40:00.090727 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 02:40:00.090808 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 02:40:00.092725 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 02:40:00.092798 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 02:40:00.094811 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 02:40:00.098649 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 17 02:40:00.098718 kernel: ata3.00: LPM support broken, forcing max_power Apr 17 02:40:00.098730 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 17 02:40:00.099785 kernel: ata3.00: applying bridge limits Apr 17 02:40:00.100865 kernel: ata3.00: LPM support broken, forcing max_power Apr 17 02:40:00.102902 kernel: ata3.00: configured for UDMA/100 Apr 17 02:40:00.103748 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 02:40:00.147046 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 17 02:40:00.147296 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 02:40:00.159838 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 17 02:40:00.442652 systemd[1]: Finished dracut-initqueue.service - 
dracut initqueue hook. Apr 17 02:40:00.444921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 02:40:00.448570 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 02:40:00.450612 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 02:40:00.451448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 02:40:00.478724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 02:40:00.864451 disk-uuid[638]: The operation has completed successfully. Apr 17 02:40:00.866739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 17 02:40:00.892530 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 02:40:00.892637 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 02:40:00.914599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 02:40:00.929946 sh[674]: Success Apr 17 02:40:00.949435 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 02:40:00.949497 kernel: device-mapper: uevent: version 1.0.3 Apr 17 02:40:00.951243 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 17 02:40:00.960768 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 17 02:40:00.984575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 02:40:00.989127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 02:40:01.005999 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 02:40:01.013002 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (686) Apr 17 02:40:01.016592 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503 Apr 17 02:40:01.016635 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 02:40:01.023932 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 17 02:40:01.023979 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 17 02:40:01.025244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 02:40:01.030122 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 17 02:40:01.034334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 02:40:01.035038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 02:40:01.041362 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 02:40:01.062732 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (717) Apr 17 02:40:01.066637 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 02:40:01.066726 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 02:40:01.072176 kernel: BTRFS info (device vda6): turning on async discard Apr 17 02:40:01.072222 kernel: BTRFS info (device vda6): enabling free space tree Apr 17 02:40:01.078722 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 02:40:01.078831 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 02:40:01.083495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 17 02:40:01.161640 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 02:40:01.162982 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 02:40:01.173130 ignition[778]: Ignition 2.22.0 Apr 17 02:40:01.173153 ignition[778]: Stage: fetch-offline Apr 17 02:40:01.173180 ignition[778]: no configs at "/usr/lib/ignition/base.d" Apr 17 02:40:01.173186 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 02:40:01.173293 ignition[778]: parsed url from cmdline: "" Apr 17 02:40:01.173295 ignition[778]: no config URL provided Apr 17 02:40:01.173300 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 02:40:01.173309 ignition[778]: no config at "/usr/lib/ignition/user.ign" Apr 17 02:40:01.173329 ignition[778]: op(1): [started] loading QEMU firmware config module Apr 17 02:40:01.173332 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 17 02:40:01.181362 ignition[778]: op(1): [finished] loading QEMU firmware config module Apr 17 02:40:01.206012 systemd-networkd[863]: lo: Link UP Apr 17 02:40:01.206061 systemd-networkd[863]: lo: Gained carrier Apr 17 02:40:01.207019 systemd-networkd[863]: Enumeration completed Apr 17 02:40:01.207753 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 02:40:01.209388 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 02:40:01.209393 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 02:40:01.210511 systemd-networkd[863]: eth0: Link UP Apr 17 02:40:01.211130 systemd-networkd[863]: eth0: Gained carrier Apr 17 02:40:01.211141 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 02:40:01.211567 systemd[1]: Reached target network.target - Network. Apr 17 02:40:01.231753 systemd-networkd[863]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 17 02:40:01.319794 ignition[778]: parsing config with SHA512: aeb9e5db69920c10258fdfa8052970d01455f1aeb3764979dc9841fbb42c3975ce07c5571fa3773e596397e51c740b912498a1dc10fd466e58dceb307deaae23 Apr 17 02:40:01.323839 unknown[778]: fetched base config from "system" Apr 17 02:40:01.324063 unknown[778]: fetched user config from "qemu" Apr 17 02:40:01.325608 ignition[778]: fetch-offline: fetch-offline passed Apr 17 02:40:01.325714 ignition[778]: Ignition finished successfully Apr 17 02:40:01.331036 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 02:40:01.331285 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 17 02:40:01.332065 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 02:40:01.373215 ignition[870]: Ignition 2.22.0 Apr 17 02:40:01.373240 ignition[870]: Stage: kargs Apr 17 02:40:01.373384 ignition[870]: no configs at "/usr/lib/ignition/base.d" Apr 17 02:40:01.373391 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 02:40:01.374052 ignition[870]: kargs: kargs passed Apr 17 02:40:01.374083 ignition[870]: Ignition finished successfully Apr 17 02:40:01.382804 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 02:40:01.383903 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 17 02:40:01.426889 ignition[879]: Ignition 2.22.0 Apr 17 02:40:01.426914 ignition[879]: Stage: disks Apr 17 02:40:01.427018 ignition[879]: no configs at "/usr/lib/ignition/base.d" Apr 17 02:40:01.427024 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 02:40:01.427576 ignition[879]: disks: disks passed Apr 17 02:40:01.431294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 02:40:01.427607 ignition[879]: Ignition finished successfully Apr 17 02:40:01.434161 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 02:40:01.437415 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 02:40:01.441914 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 02:40:01.445750 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 02:40:01.450929 systemd[1]: Reached target basic.target - Basic System. Apr 17 02:40:01.455969 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 02:40:01.483088 systemd-fsck[889]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 17 02:40:01.487895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 02:40:01.488717 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 02:40:01.703752 kernel: EXT4-fs (vda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none. Apr 17 02:40:01.704061 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 02:40:01.707328 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 02:40:01.710215 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 02:40:01.713437 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 02:40:01.715851 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 17 02:40:01.725787 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Apr 17 02:40:01.715883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 02:40:01.737590 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 02:40:01.737608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 02:40:01.737616 kernel: BTRFS info (device vda6): turning on async discard Apr 17 02:40:01.737623 kernel: BTRFS info (device vda6): enabling free space tree Apr 17 02:40:01.715900 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 02:40:01.721052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 02:40:01.726527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 02:40:01.738396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 02:40:01.763652 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 02:40:01.768417 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Apr 17 02:40:01.775587 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 02:40:01.782304 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 02:40:01.832155 systemd-resolved[243]: Detected conflict on linux IN A 10.0.0.8 Apr 17 02:40:01.832183 systemd-resolved[243]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Apr 17 02:40:01.860111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 02:40:01.862845 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 02:40:01.866256 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 17 02:40:01.881695 kernel: BTRFS info (device vda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:40:01.891437 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 02:40:01.903628 ignition[1011]: INFO : Ignition 2.22.0
Apr 17 02:40:01.903628 ignition[1011]: INFO : Stage: mount
Apr 17 02:40:01.906151 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:40:01.906151 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:40:01.906151 ignition[1011]: INFO : mount: mount passed
Apr 17 02:40:01.906151 ignition[1011]: INFO : Ignition finished successfully
Apr 17 02:40:01.912846 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 02:40:01.916889 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 02:40:02.010867 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 02:40:02.012349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 02:40:02.030742 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Apr 17 02:40:02.034353 kernel: BTRFS info (device vda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a
Apr 17 02:40:02.034380 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 02:40:02.038713 kernel: BTRFS info (device vda6): turning on async discard
Apr 17 02:40:02.038751 kernel: BTRFS info (device vda6): enabling free space tree
Apr 17 02:40:02.040092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 02:40:02.085741 ignition[1041]: INFO : Ignition 2.22.0
Apr 17 02:40:02.085741 ignition[1041]: INFO : Stage: files
Apr 17 02:40:02.088635 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:40:02.088635 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:40:02.088635 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 02:40:02.094623 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 02:40:02.094623 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 02:40:02.099603 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 02:40:02.102197 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 02:40:02.104851 unknown[1041]: wrote ssh authorized keys file for user: core
Apr 17 02:40:02.106896 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 02:40:02.106896 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 02:40:02.106896 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 02:40:02.180895 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 02:40:02.294815 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 02:40:02.294815 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 02:40:02.301546 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 02:40:02.419568 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 02:40:02.824985 systemd-networkd[863]: eth0: Gained IPv6LL
Apr 17 02:40:03.128493 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 02:40:03.128493 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 02:40:03.134735 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 02:40:03.138169 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 02:40:03.138169 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 02:40:03.138169 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 02:40:03.145315 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 02:40:03.148847 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 02:40:03.148847 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 02:40:03.148847 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 02:40:03.175819 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 02:40:03.179168 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 02:40:03.181935 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 02:40:03.181935 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 02:40:03.186495 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 02:40:03.186495 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 02:40:03.186495 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 02:40:03.186495 ignition[1041]: INFO : files: files passed
Apr 17 02:40:03.186495 ignition[1041]: INFO : Ignition finished successfully
Apr 17 02:40:03.189555 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 02:40:03.194502 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 02:40:03.207054 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 02:40:03.209835 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 02:40:03.209915 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 02:40:03.235002 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 02:40:03.239216 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:40:03.239216 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:40:03.246299 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 02:40:03.241590 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 02:40:03.242315 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 02:40:03.251189 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 02:40:03.313053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 02:40:03.313171 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 02:40:03.316811 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 02:40:03.320254 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 02:40:03.323360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 02:40:03.327619 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 02:40:03.360144 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 02:40:03.365156 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 02:40:03.388427 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 02:40:03.388632 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 02:40:03.393201 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 02:40:03.398540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 02:40:03.398884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 02:40:03.405758 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 02:40:03.405941 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 02:40:03.409450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 02:40:03.413991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 02:40:03.419709 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 02:40:03.419887 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 02:40:03.423853 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 02:40:03.428937 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 02:40:03.430547 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 02:40:03.434361 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 02:40:03.437844 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 02:40:03.441173 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 02:40:03.441504 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 02:40:03.447061 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 02:40:03.448996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 02:40:03.454591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 02:40:03.454820 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 02:40:03.458415 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 02:40:03.458540 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 02:40:03.464115 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 02:40:03.464233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 02:40:03.469900 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 02:40:03.471310 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 02:40:03.475740 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 02:40:03.482131 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 02:40:03.483953 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 02:40:03.486903 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 02:40:03.487048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 02:40:03.490010 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 02:40:03.490135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 02:40:03.493166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 02:40:03.493329 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 02:40:03.498149 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 02:40:03.498298 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 02:40:03.506045 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 02:40:03.508186 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 02:40:03.508448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 02:40:03.508559 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 02:40:03.509076 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 02:40:03.509175 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 02:40:03.512809 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 02:40:03.512888 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 02:40:03.898527 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 02:40:03.937743 ignition[1097]: INFO : Ignition 2.22.0
Apr 17 02:40:03.939818 ignition[1097]: INFO : Stage: umount
Apr 17 02:40:03.939818 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 02:40:03.939818 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 02:40:03.946691 ignition[1097]: INFO : umount: umount passed
Apr 17 02:40:03.946691 ignition[1097]: INFO : Ignition finished successfully
Apr 17 02:40:03.948763 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 02:40:03.948869 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 02:40:03.954857 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 02:40:03.954976 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 02:40:03.960496 systemd[1]: Stopped target network.target - Network.
Apr 17 02:40:03.960604 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 02:40:03.960640 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 02:40:03.964095 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 02:40:03.964127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 02:40:03.971908 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 02:40:03.971952 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 02:40:03.973419 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 02:40:03.973451 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 02:40:03.976422 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 02:40:03.976455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 02:40:03.982807 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 02:40:03.985895 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 02:40:03.993472 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 02:40:03.993640 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 02:40:03.999221 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 02:40:04.000207 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 02:40:04.000317 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 02:40:04.006836 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 02:40:04.007188 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 02:40:04.007449 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 02:40:04.010805 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 02:40:04.011373 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 02:40:04.015319 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 02:40:04.015355 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 02:40:04.020865 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 02:40:04.025099 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 02:40:04.025148 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 02:40:04.030256 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 02:40:04.030325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 02:40:04.035117 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 02:40:04.035167 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 02:40:04.037353 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 02:40:04.044506 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 02:40:04.053649 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 02:40:04.053846 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 02:40:04.070559 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 02:40:04.070738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 02:40:04.073176 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 02:40:04.073201 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 02:40:04.081296 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 02:40:04.081325 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 02:40:04.082844 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 02:40:04.082884 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 02:40:04.093084 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 02:40:04.093385 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 02:40:04.099769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 02:40:04.099832 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 02:40:04.107366 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 02:40:04.107451 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 02:40:04.107484 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 02:40:04.114842 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 02:40:04.114886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 02:40:04.122860 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 02:40:04.122925 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 02:40:04.129128 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 02:40:04.129175 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 02:40:04.132618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 02:40:04.132647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:40:04.138574 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 02:40:04.138729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 02:40:04.140268 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 02:40:04.147479 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 02:40:04.165850 systemd[1]: Switching root.
Apr 17 02:40:04.192501 systemd-journald[203]: Journal stopped
Apr 17 02:40:05.170480 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Apr 17 02:40:05.170528 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 02:40:05.170539 kernel: SELinux: policy capability open_perms=1
Apr 17 02:40:05.170548 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 02:40:05.170557 kernel: SELinux: policy capability always_check_network=0
Apr 17 02:40:05.170569 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 02:40:05.170579 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 02:40:05.170587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 02:40:05.170594 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 02:40:05.170602 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 02:40:05.170609 kernel: audit: type=1403 audit(1776393604.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 02:40:05.170621 systemd[1]: Successfully loaded SELinux policy in 52.674ms.
Apr 17 02:40:05.170636 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.608ms.
Apr 17 02:40:05.170644 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 02:40:05.170652 systemd[1]: Detected virtualization kvm.
Apr 17 02:40:05.170660 systemd[1]: Detected architecture x86-64.
Apr 17 02:40:05.170789 systemd[1]: Detected first boot.
Apr 17 02:40:05.170802 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 02:40:05.170810 zram_generator::config[1143]: No configuration found.
Apr 17 02:40:05.170819 kernel: Guest personality initialized and is inactive
Apr 17 02:40:05.170826 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 02:40:05.170838 kernel: Initialized host personality
Apr 17 02:40:05.170845 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 02:40:05.170853 systemd[1]: Populated /etc with preset unit settings.
Apr 17 02:40:05.170861 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 02:40:05.170869 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 02:40:05.170877 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 02:40:05.170885 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 02:40:05.170893 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 02:40:05.170901 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 02:40:05.170911 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 02:40:05.170918 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 02:40:05.170926 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 02:40:05.170934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 02:40:05.170942 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 02:40:05.170949 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 02:40:05.170957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 02:40:05.170965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 02:40:05.170974 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 02:40:05.170982 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 02:40:05.170989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 02:40:05.170997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 02:40:05.171006 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 02:40:05.171014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 02:40:05.171022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 02:40:05.171030 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 02:40:05.171039 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 02:40:05.171047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 02:40:05.171055 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 02:40:05.171062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 02:40:05.171070 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 02:40:05.171078 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 02:40:05.171086 systemd[1]: Reached target swap.target - Swaps.
Apr 17 02:40:05.171094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 02:40:05.171102 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 02:40:05.171111 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 02:40:05.171119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 02:40:05.171126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 02:40:05.171134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 02:40:05.171141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 02:40:05.171149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 02:40:05.171157 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 02:40:05.171164 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 02:40:05.171173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:05.171185 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 02:40:05.171194 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 02:40:05.171202 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 02:40:05.171209 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 02:40:05.171217 systemd[1]: Reached target machines.target - Containers.
Apr 17 02:40:05.171224 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 02:40:05.171232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:40:05.171240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 02:40:05.171247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 02:40:05.171256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:40:05.171264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 02:40:05.171271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:40:05.171299 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 02:40:05.171308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:40:05.171316 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 02:40:05.171324 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 02:40:05.171332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 02:40:05.171341 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 02:40:05.171348 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 02:40:05.171356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:40:05.171365 kernel: fuse: init (API version 7.41)
Apr 17 02:40:05.171372 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 02:40:05.171380 kernel: loop: module loaded
Apr 17 02:40:05.171387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 02:40:05.171395 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 02:40:05.171402 kernel: ACPI: bus type drm_connector registered
Apr 17 02:40:05.171411 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 02:40:05.171419 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 02:40:05.171427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 02:40:05.171436 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 02:40:05.171459 systemd-journald[1225]: Collecting audit messages is disabled.
Apr 17 02:40:05.171478 systemd[1]: Stopped verity-setup.service.
Apr 17 02:40:05.171487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:05.171496 systemd-journald[1225]: Journal started
Apr 17 02:40:05.171512 systemd-journald[1225]: Runtime Journal (/run/log/journal/4ca1a49df79941cbb67fa1a01a1762f7) is 6M, max 48.1M, 42.1M free.
Apr 17 02:40:04.862826 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 02:40:04.878933 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 17 02:40:04.879533 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 02:40:05.179368 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 02:40:05.180709 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 02:40:05.182561 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 02:40:05.184879 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 02:40:05.186762 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 02:40:05.189045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 02:40:05.190994 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 02:40:05.192850 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 02:40:05.195545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 02:40:05.198075 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 02:40:05.198313 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 02:40:05.202188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:40:05.202866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:40:05.205146 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 02:40:05.205317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 02:40:05.207254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:40:05.207423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:40:05.209651 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 02:40:05.209829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 02:40:05.212020 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:40:05.212178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:40:05.214352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 02:40:05.216589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 02:40:05.219013 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 02:40:05.221614 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 02:40:05.230037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 02:40:05.233864 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 02:40:05.237409 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 02:40:05.240066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 02:40:05.242268 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 02:40:05.242348 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 02:40:05.245048 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 02:40:05.250838 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 02:40:05.252608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:40:05.253662 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 02:40:05.256467 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 02:40:05.258444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 02:40:05.259307 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 02:40:05.261136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 02:40:05.262794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 02:40:05.267838 systemd-journald[1225]: Time spent on flushing to /var/log/journal/4ca1a49df79941cbb67fa1a01a1762f7 is 21.071ms for 1070 entries.
Apr 17 02:40:05.267838 systemd-journald[1225]: System Journal (/var/log/journal/4ca1a49df79941cbb67fa1a01a1762f7) is 8M, max 195.6M, 187.6M free.
Apr 17 02:40:05.306486 systemd-journald[1225]: Received client request to flush runtime journal.
Apr 17 02:40:05.306531 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 02:40:05.266991 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 02:40:05.272782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 02:40:05.277259 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 02:40:05.279554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 02:40:05.307072 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 02:40:05.309605 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 02:40:05.314567 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 02:40:05.318196 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 02:40:05.320584 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 02:40:05.327464 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 17 02:40:05.327492 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Apr 17 02:40:05.331857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 02:40:05.340345 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 02:40:05.340747 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 02:40:05.447822 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 02:40:05.457725 kernel: loop1: detected capacity change from 0 to 128560
Apr 17 02:40:05.461926 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 02:40:05.468974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 02:40:05.488386 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 17 02:40:05.488412 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 17 02:40:05.491787 kernel: loop2: detected capacity change from 0 to 110984
Apr 17 02:40:05.491057 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 02:40:05.552863 kernel: loop3: detected capacity change from 0 to 228704
Apr 17 02:40:05.571734 kernel: loop4: detected capacity change from 0 to 128560
Apr 17 02:40:05.584730 kernel: loop5: detected capacity change from 0 to 110984
Apr 17 02:40:05.594737 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 17 02:40:05.595163 (sd-merge)[1290]: Merged extensions into '/usr'.
Apr 17 02:40:05.607216 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 02:40:05.607251 systemd[1]: Reloading...
Apr 17 02:40:05.656716 zram_generator::config[1312]: No configuration found.
Apr 17 02:40:05.910025 ldconfig[1258]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 02:40:06.071886 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 02:40:06.072848 systemd[1]: Reloading finished in 465 ms.
Apr 17 02:40:06.095801 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 02:40:06.099334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 02:40:06.117357 systemd[1]: Starting ensure-sysext.service...
Apr 17 02:40:06.120226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 02:40:06.149210 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)...
Apr 17 02:40:06.149237 systemd[1]: Reloading...
Apr 17 02:40:06.152539 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 02:40:06.152575 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 02:40:06.152779 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 02:40:06.152955 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 02:40:06.153478 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 02:40:06.153788 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Apr 17 02:40:06.153835 systemd-tmpfiles[1354]: ACLs are not supported, ignoring.
Apr 17 02:40:06.155915 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 02:40:06.155968 systemd-tmpfiles[1354]: Skipping /boot
Apr 17 02:40:06.162499 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 02:40:06.162507 systemd-tmpfiles[1354]: Skipping /boot
Apr 17 02:40:06.210857 zram_generator::config[1381]: No configuration found.
Apr 17 02:40:06.346057 systemd[1]: Reloading finished in 196 ms.
Apr 17 02:40:06.358755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 02:40:06.365624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 02:40:06.376481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 02:40:06.379628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 02:40:06.382574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 02:40:06.386385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 02:40:06.389843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 02:40:06.393806 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 02:40:06.401553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:06.401851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:40:06.412160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:40:06.415786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:40:06.419130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:40:06.420956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:40:06.421121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:40:06.424101 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 02:40:06.425830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:06.435396 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 02:40:06.437153 systemd-udevd[1427]: Using default interface naming scheme 'v255'.
Apr 17 02:40:06.438131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:40:06.438238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:40:06.440872 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 02:40:06.443485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:40:06.443623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:40:06.444471 augenrules[1449]: No rules
Apr 17 02:40:06.446069 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 02:40:06.446231 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 02:40:06.448165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:40:06.448390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:40:06.458221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:06.458383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:40:06.459367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:40:06.464833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:40:06.470874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:40:06.472728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:40:06.472812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:40:06.473872 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 02:40:06.476765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 02:40:06.476862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:06.477528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 02:40:06.584981 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 02:40:06.860793 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 02:40:06.867626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:40:06.867822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:40:06.871143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:40:06.871713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:40:06.874404 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:40:06.874809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:40:06.881077 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 02:40:06.980352 systemd[1]: Finished ensure-sysext.service.
Apr 17 02:40:06.986749 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 02:40:06.993135 systemd-resolved[1424]: Positive Trust Anchors:
Apr 17 02:40:06.993428 systemd-resolved[1424]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 02:40:06.993489 systemd-resolved[1424]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 02:40:06.997183 systemd-resolved[1424]: Defaulting to hostname 'linux'.
Apr 17 02:40:06.997758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 02:40:07.000958 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 02:40:07.001344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 02:40:07.005972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 02:40:07.008404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:07.011167 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 02:40:07.014850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 02:40:07.016143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 02:40:07.018979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 02:40:07.020855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 02:40:07.025267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 02:40:07.027949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 02:40:07.029902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 02:40:07.031524 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 02:40:07.033833 kernel: ACPI: button: Power Button [PWRF]
Apr 17 02:40:07.034483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 02:40:07.035737 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 02:40:07.039852 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 02:40:07.042395 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 02:40:07.042440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 02:40:07.042979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 02:40:07.046719 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 17 02:40:07.047014 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 02:40:07.047408 augenrules[1512]: /sbin/augenrules: No change
Apr 17 02:40:07.049438 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 02:40:07.049995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 02:40:07.052332 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 02:40:07.052490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 02:40:07.057755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 02:40:07.057947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 02:40:07.060841 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 02:40:07.061059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 02:40:07.070248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 02:40:07.070352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 02:40:07.081820 augenrules[1548]: No rules
Apr 17 02:40:07.077241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 02:40:07.082957 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 02:40:07.083167 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 02:40:07.132034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:40:07.175350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 02:40:07.176779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:40:07.181855 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 02:40:07.188025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 02:40:07.283415 systemd-networkd[1526]: lo: Link UP
Apr 17 02:40:07.283641 systemd-networkd[1526]: lo: Gained carrier
Apr 17 02:40:07.283976 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 02:40:07.284133 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 02:40:07.284834 systemd-networkd[1526]: Enumeration completed
Apr 17 02:40:07.285778 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 02:40:07.286373 systemd[1]: Reached target network.target - Network.
Apr 17 02:40:07.287795 systemd-networkd[1526]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:40:07.287814 systemd-networkd[1526]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 02:40:07.288355 systemd-networkd[1526]: eth0: Link UP
Apr 17 02:40:07.288499 systemd-networkd[1526]: eth0: Gained carrier
Apr 17 02:40:07.288527 systemd-networkd[1526]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 02:40:07.290142 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 02:40:07.292861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 02:40:07.295116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 02:40:07.302796 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 02:40:07.306726 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 02:40:07.310507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 02:40:07.313147 systemd-networkd[1526]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 02:40:07.313729 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 17 02:40:07.315847 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Apr 17 02:40:07.316666 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 02:40:07.996181 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 02:40:07.996199 systemd-resolved[1424]: Clock change detected. Flushing caches.
Apr 17 02:40:07.996242 systemd-timesyncd[1527]: Initial clock synchronization to Fri 2026-04-17 02:40:07.996047 UTC.
Apr 17 02:40:07.996854 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 02:40:07.999590 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 02:40:08.001993 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 02:40:08.002051 systemd[1]: Reached target paths.target - Path Units.
Apr 17 02:40:08.004705 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 02:40:08.009402 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 02:40:08.014780 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 02:40:08.020771 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 17 02:40:08.023188 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 17 02:40:08.025357 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 17 02:40:08.033217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 02:40:08.036688 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 17 02:40:08.041881 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 02:40:08.049523 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 02:40:08.051869 systemd[1]: Reached target basic.target - Basic System.
Apr 17 02:40:08.053615 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 02:40:08.053652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 02:40:08.055501 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 02:40:08.061183 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 02:40:08.064914 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 02:40:08.067973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 02:40:08.070654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 02:40:08.072975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 02:40:08.074791 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 17 02:40:08.077055 jq[1583]: false
Apr 17 02:40:08.078426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 02:40:08.082035 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 02:40:08.085093 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 02:40:08.088522 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 02:40:08.093075 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 02:40:08.095835 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 02:40:08.096218 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 02:40:08.096973 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 02:40:08.101731 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 02:40:08.102433 oslogin_cache_refresh[1585]: Refreshing passwd entry cache
Apr 17 02:40:08.102898 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing passwd entry cache
Apr 17 02:40:08.104687 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 02:40:08.107980 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 02:40:08.109505 extend-filesystems[1584]: Found /dev/vda6
Apr 17 02:40:08.110411 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 02:40:08.111732 oslogin_cache_refresh[1585]: Failure getting users, quitting
Apr 17 02:40:08.112411 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting users, quitting
Apr 17 02:40:08.112411 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 02:40:08.112411 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing group entry cache
Apr 17 02:40:08.110608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 02:40:08.112481 extend-filesystems[1584]: Found /dev/vda9
Apr 17 02:40:08.111757 oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 02:40:08.110793 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 02:40:08.111793 oslogin_cache_refresh[1585]: Refreshing group entry cache
Apr 17 02:40:08.115287 extend-filesystems[1584]: Checking size of /dev/vda9
Apr 17 02:40:08.117569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 02:40:08.119708 oslogin_cache_refresh[1585]: Failure getting groups, quitting
Apr 17 02:40:08.120514 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 02:40:08.121101 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting groups, quitting
Apr 17 02:40:08.121101 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 02:40:08.119718 oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 02:40:08.121051 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 02:40:08.124584 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 17 02:40:08.124756 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 17 02:40:08.139008 jq[1600]: true
Apr 17 02:40:08.139243 update_engine[1599]: I20260417 02:40:08.136620 1599 main.cc:92] Flatcar Update Engine starting
Apr 17 02:40:08.135687 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 02:40:08.145128 jq[1612]: true
Apr 17 02:40:08.230306 kernel: hrtimer: interrupt took 2826397 ns
Apr 17 02:40:08.252765 tar[1605]: linux-amd64/LICENSE
Apr 17 02:40:08.254068 tar[1605]: linux-amd64/helm
Apr 17 02:40:08.478216 dbus-daemon[1581]: [system] SELinux support is enabled
Apr 17 02:40:08.478689 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 02:40:08.483167 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 02:40:08.483788 update_engine[1599]: I20260417 02:40:08.482975 1599 update_check_scheduler.cc:74] Next update check in 2m22s
Apr 17 02:40:08.483230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 02:40:08.486127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 02:40:08.486162 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 02:40:08.489071 extend-filesystems[1584]: Resized partition /dev/vda9
Apr 17 02:40:08.492741 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 02:40:08.493688 bash[1640]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 02:40:08.497346 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 02:40:08.502537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 02:40:08.504325 extend-filesystems[1642]: resize2fs 1.47.3 (8-Jul-2025)
Apr 17 02:40:08.506904 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 02:40:08.512028 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 02:40:08.622976 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 02:40:08.648111 extend-filesystems[1642]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 02:40:08.648111 extend-filesystems[1642]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 02:40:08.648111 extend-filesystems[1642]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 02:40:08.670403 extend-filesystems[1584]: Resized filesystem in /dev/vda9
Apr 17 02:40:08.651844 systemd-logind[1598]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 02:40:08.651856 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 02:40:08.653108 systemd-logind[1598]: New seat seat0.
Apr 17 02:40:08.654780 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 02:40:08.664618 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 02:40:08.664778 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 02:40:08.681336 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 02:40:09.016014 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 02:40:09.046060 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 02:40:09.049490 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 02:40:09.062462 containerd[1607]: time="2026-04-17T02:40:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 17 02:40:09.063271 containerd[1607]: time="2026-04-17T02:40:09.063235018Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 17 02:40:09.071826 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 02:40:09.072108 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 02:40:09.076141 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 02:40:09.085146 containerd[1607]: time="2026-04-17T02:40:09.085024963Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.445µs" Apr 17 02:40:09.085146 containerd[1607]: time="2026-04-17T02:40:09.085105571Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 17 02:40:09.085146 containerd[1607]: time="2026-04-17T02:40:09.085130292Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 17 02:40:09.085338 containerd[1607]: time="2026-04-17T02:40:09.085306025Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 17 02:40:09.085384 containerd[1607]: time="2026-04-17T02:40:09.085358572Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 17 02:40:09.085532 containerd[1607]: time="2026-04-17T02:40:09.085395013Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085532 containerd[1607]: time="2026-04-17T02:40:09.085456399Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085532 containerd[1607]: time="2026-04-17T02:40:09.085465023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085788 containerd[1607]: time="2026-04-17T02:40:09.085754486Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085788 containerd[1607]: time="2026-04-17T02:40:09.085786074Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085827 containerd[1607]: time="2026-04-17T02:40:09.085795295Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085827 containerd[1607]: time="2026-04-17T02:40:09.085801194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 17 02:40:09.085983 containerd[1607]: time="2026-04-17T02:40:09.085913657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 17 02:40:09.086209 containerd[1607]: time="2026-04-17T02:40:09.086176758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 02:40:09.086228 containerd[1607]: time="2026-04-17T02:40:09.086215699Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 17 02:40:09.086228 containerd[1607]: time="2026-04-17T02:40:09.086223982Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 17 02:40:09.086350 containerd[1607]: time="2026-04-17T02:40:09.086316955Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 17 02:40:09.086739 containerd[1607]: time="2026-04-17T02:40:09.086712291Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 17 02:40:09.086812 containerd[1607]: time="2026-04-17T02:40:09.086788332Z" level=info msg="metadata content store policy set" policy=shared Apr 17 02:40:09.094498 containerd[1607]: time="2026-04-17T02:40:09.094188329Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 17 02:40:09.094498 containerd[1607]: time="2026-04-17T02:40:09.094588790Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094618061Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094628345Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094640332Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094649228Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094657412Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094677817Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094686857Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094695826Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094702620Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 17 02:40:09.094787 containerd[1607]: time="2026-04-17T02:40:09.094712314Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.094962653Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.094984766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.094996224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095021576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095029513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095036604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095045439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 
containerd[1607]: time="2026-04-17T02:40:09.095066838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095076359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 17 02:40:09.095082 containerd[1607]: time="2026-04-17T02:40:09.095083355Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 17 02:40:09.095338 containerd[1607]: time="2026-04-17T02:40:09.095091487Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 17 02:40:09.095338 containerd[1607]: time="2026-04-17T02:40:09.095292807Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 17 02:40:09.095338 containerd[1607]: time="2026-04-17T02:40:09.095305341Z" level=info msg="Start snapshots syncer" Apr 17 02:40:09.095377 containerd[1607]: time="2026-04-17T02:40:09.095345319Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 17 02:40:09.095785 containerd[1607]: time="2026-04-17T02:40:09.095751622Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 17 02:40:09.096022 containerd[1607]: time="2026-04-17T02:40:09.095860608Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 17 02:40:09.099745 containerd[1607]: time="2026-04-17T02:40:09.099464324Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 17 02:40:09.099834 containerd[1607]: time="2026-04-17T02:40:09.099797654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 17 02:40:09.099913 containerd[1607]: time="2026-04-17T02:40:09.099886809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 17 02:40:09.100008 containerd[1607]: time="2026-04-17T02:40:09.099980412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 17 02:40:09.100025 containerd[1607]: time="2026-04-17T02:40:09.100006854Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 17 02:40:09.100025 containerd[1607]: time="2026-04-17T02:40:09.100020230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 17 02:40:09.100051 containerd[1607]: time="2026-04-17T02:40:09.100032164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 17 02:40:09.100215 containerd[1607]: time="2026-04-17T02:40:09.100188314Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 17 02:40:09.100281 containerd[1607]: time="2026-04-17T02:40:09.100258159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 17 02:40:09.100301 containerd[1607]: time="2026-04-17T02:40:09.100280548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 17 02:40:09.100301 containerd[1607]: time="2026-04-17T02:40:09.100294356Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 17 02:40:09.100422 containerd[1607]: time="2026-04-17T02:40:09.100395616Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 02:40:09.100475 containerd[1607]: time="2026-04-17T02:40:09.100448682Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 17 02:40:09.100475 containerd[1607]: time="2026-04-17T02:40:09.100471897Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 02:40:09.100504 containerd[1607]: time="2026-04-17T02:40:09.100483360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 17 02:40:09.100504 containerd[1607]: time="2026-04-17T02:40:09.100491725Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 17 02:40:09.100569 containerd[1607]: time="2026-04-17T02:40:09.100500630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 17 02:40:09.100569 containerd[1607]: time="2026-04-17T02:40:09.100517227Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 17 02:40:09.100569 containerd[1607]: time="2026-04-17T02:40:09.100566680Z" level=info msg="runtime interface created" Apr 17 02:40:09.100608 containerd[1607]: time="2026-04-17T02:40:09.100572598Z" level=info msg="created NRI interface" Apr 17 02:40:09.100608 containerd[1607]: time="2026-04-17T02:40:09.100579792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 02:40:09.100608 containerd[1607]: time="2026-04-17T02:40:09.100594667Z" level=info msg="Connect containerd service" Apr 17 02:40:09.103454 containerd[1607]: time="2026-04-17T02:40:09.100616652Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 02:40:09.101275 systemd[1]: 
Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 02:40:09.106364 containerd[1607]: time="2026-04-17T02:40:09.104002837Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 02:40:09.110520 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 02:40:09.179084 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 02:40:09.181697 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 02:40:09.232665 tar[1605]: linux-amd64/README.md Apr 17 02:40:09.258053 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 02:40:09.328749 containerd[1607]: time="2026-04-17T02:40:09.328326256Z" level=info msg="Start subscribing containerd event" Apr 17 02:40:09.328874 containerd[1607]: time="2026-04-17T02:40:09.328662596Z" level=info msg="Start recovering state" Apr 17 02:40:09.329105 containerd[1607]: time="2026-04-17T02:40:09.329069694Z" level=info msg="Start event monitor" Apr 17 02:40:09.329175 containerd[1607]: time="2026-04-17T02:40:09.329134878Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 02:40:09.329193 containerd[1607]: time="2026-04-17T02:40:09.329148440Z" level=info msg="Start cni network conf syncer for default" Apr 17 02:40:09.329224 containerd[1607]: time="2026-04-17T02:40:09.329209137Z" level=info msg="Start streaming server" Apr 17 02:40:09.329239 containerd[1607]: time="2026-04-17T02:40:09.329223721Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 02:40:09.329269 containerd[1607]: time="2026-04-17T02:40:09.329253197Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 02:40:09.329298 containerd[1607]: time="2026-04-17T02:40:09.329283491Z" level=info msg="runtime interface starting up..." 
Apr 17 02:40:09.329334 containerd[1607]: time="2026-04-17T02:40:09.329304820Z" level=info msg="starting plugins..." Apr 17 02:40:09.329397 containerd[1607]: time="2026-04-17T02:40:09.329378281Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 02:40:09.329664 containerd[1607]: time="2026-04-17T02:40:09.329640738Z" level=info msg="containerd successfully booted in 0.267668s" Apr 17 02:40:09.329840 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 02:40:09.778332 systemd-networkd[1526]: eth0: Gained IPv6LL Apr 17 02:40:09.897689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 02:40:09.935828 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 02:40:09.973533 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 17 02:40:09.986974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:40:10.003711 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 02:40:10.027663 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 17 02:40:10.028039 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 17 02:40:10.030960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 02:40:10.037913 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 02:40:12.212063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:12.215015 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 02:40:12.217165 systemd[1]: Startup finished in 3.012s (kernel) + 5.679s (initrd) + 7.261s (userspace) = 15.953s. 
Apr 17 02:40:12.226528 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 02:40:12.519901 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 02:40:12.522776 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:57522.service - OpenSSH per-connection server daemon (10.0.0.1:57522). Apr 17 02:40:12.624469 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 57522 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:12.626182 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:12.635722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 02:40:12.636592 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 02:40:12.641956 systemd-logind[1598]: New session 1 of user core. Apr 17 02:40:12.662432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 02:40:12.666238 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 02:40:12.680710 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 02:40:12.685340 systemd-logind[1598]: New session c1 of user core. Apr 17 02:40:12.941642 systemd[1735]: Queued start job for default target default.target. Apr 17 02:40:12.949471 systemd[1735]: Created slice app.slice - User Application Slice. Apr 17 02:40:12.949497 systemd[1735]: Reached target paths.target - Paths. Apr 17 02:40:12.949551 systemd[1735]: Reached target timers.target - Timers. Apr 17 02:40:12.950699 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 02:40:12.964328 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 02:40:12.964430 systemd[1735]: Reached target sockets.target - Sockets. 
Apr 17 02:40:12.964458 systemd[1735]: Reached target basic.target - Basic System. Apr 17 02:40:12.964478 systemd[1735]: Reached target default.target - Main User Target. Apr 17 02:40:12.964498 systemd[1735]: Startup finished in 268ms. Apr 17 02:40:12.965204 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 02:40:12.972095 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 02:40:13.002543 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:57528.service - OpenSSH per-connection server daemon (10.0.0.1:57528). Apr 17 02:40:13.061992 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 57528 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:13.063292 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:13.068611 systemd-logind[1598]: New session 2 of user core. Apr 17 02:40:13.080489 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 02:40:13.094861 sshd[1749]: Connection closed by 10.0.0.1 port 57528 Apr 17 02:40:13.155751 kubelet[1717]: E0417 02:40:13.152450 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 02:40:13.155497 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:13.165314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 02:40:13.165426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 02:40:13.165664 systemd[1]: kubelet.service: Consumed 2.844s CPU time, 266.1M memory peak. Apr 17 02:40:13.165968 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:57528.service: Deactivated successfully. 
Apr 17 02:40:13.168113 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 02:40:13.169346 systemd-logind[1598]: Session 2 logged out. Waiting for processes to exit. Apr 17 02:40:13.171158 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:57542.service - OpenSSH per-connection server daemon (10.0.0.1:57542). Apr 17 02:40:13.171948 systemd-logind[1598]: Removed session 2. Apr 17 02:40:13.328129 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 57542 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:13.348987 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:13.357112 systemd-logind[1598]: New session 3 of user core. Apr 17 02:40:13.369095 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 02:40:13.378893 sshd[1759]: Connection closed by 10.0.0.1 port 57542 Apr 17 02:40:13.379517 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:13.401806 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:57542.service: Deactivated successfully. Apr 17 02:40:13.403177 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 02:40:13.405043 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Apr 17 02:40:13.412110 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:57556.service - OpenSSH per-connection server daemon (10.0.0.1:57556). Apr 17 02:40:13.413022 systemd-logind[1598]: Removed session 3. Apr 17 02:40:13.481431 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 57556 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:13.482797 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:13.490793 systemd-logind[1598]: New session 4 of user core. Apr 17 02:40:13.500124 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 17 02:40:13.512244 sshd[1769]: Connection closed by 10.0.0.1 port 57556 Apr 17 02:40:13.512770 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:13.520880 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:57556.service: Deactivated successfully. Apr 17 02:40:13.522188 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 02:40:13.523105 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. Apr 17 02:40:13.527070 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:57560.service - OpenSSH per-connection server daemon (10.0.0.1:57560). Apr 17 02:40:13.527535 systemd-logind[1598]: Removed session 4. Apr 17 02:40:13.588830 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 57560 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:13.590116 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:13.595697 systemd-logind[1598]: New session 5 of user core. Apr 17 02:40:13.603258 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 02:40:13.620543 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 02:40:13.620804 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:40:13.636403 sudo[1779]: pam_unix(sudo:session): session closed for user root Apr 17 02:40:13.638209 sshd[1778]: Connection closed by 10.0.0.1 port 57560 Apr 17 02:40:13.638561 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:13.649889 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:57560.service: Deactivated successfully. Apr 17 02:40:13.651394 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 02:40:13.652297 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Apr 17 02:40:13.656265 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:57570.service - OpenSSH per-connection server daemon (10.0.0.1:57570). 
Apr 17 02:40:13.660674 systemd-logind[1598]: Removed session 5. Apr 17 02:40:13.724120 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 57570 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:13.726390 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:13.730882 systemd-logind[1598]: New session 6 of user core. Apr 17 02:40:13.737095 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 02:40:13.752558 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 02:40:13.753915 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:40:13.761296 sudo[1790]: pam_unix(sudo:session): session closed for user root Apr 17 02:40:13.766543 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 02:40:13.766760 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:40:13.779392 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 02:40:13.954176 augenrules[1812]: No rules Apr 17 02:40:13.956512 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 02:40:13.957670 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 02:40:13.962546 sudo[1789]: pam_unix(sudo:session): session closed for user root Apr 17 02:40:13.964056 sshd[1788]: Connection closed by 10.0.0.1 port 57570 Apr 17 02:40:13.964376 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:13.972975 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:57570.service: Deactivated successfully. Apr 17 02:40:13.974299 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 02:40:13.974989 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. 
Apr 17 02:40:13.977197 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586). Apr 17 02:40:13.978672 systemd-logind[1598]: Removed session 6. Apr 17 02:40:14.049874 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:40:14.052472 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:40:14.065199 systemd-logind[1598]: New session 7 of user core. Apr 17 02:40:14.080307 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 02:40:14.117840 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 02:40:14.119909 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 02:40:15.748903 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 02:40:15.766366 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 02:40:16.656322 dockerd[1846]: time="2026-04-17T02:40:16.656132871Z" level=info msg="Starting up" Apr 17 02:40:16.657465 dockerd[1846]: time="2026-04-17T02:40:16.657085378Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 02:40:16.697620 dockerd[1846]: time="2026-04-17T02:40:16.697417129Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 02:40:16.799404 dockerd[1846]: time="2026-04-17T02:40:16.799235144Z" level=info msg="Loading containers: start." Apr 17 02:40:16.815590 kernel: Initializing XFRM netlink socket Apr 17 02:40:17.316351 systemd-networkd[1526]: docker0: Link UP Apr 17 02:40:17.323379 dockerd[1846]: time="2026-04-17T02:40:17.323229342Z" level=info msg="Loading containers: done." 
Apr 17 02:40:17.342113 dockerd[1846]: time="2026-04-17T02:40:17.342005243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 02:40:17.342353 dockerd[1846]: time="2026-04-17T02:40:17.342326753Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 02:40:17.342470 dockerd[1846]: time="2026-04-17T02:40:17.342456073Z" level=info msg="Initializing buildkit" Apr 17 02:40:17.395817 dockerd[1846]: time="2026-04-17T02:40:17.395594547Z" level=info msg="Completed buildkit initialization" Apr 17 02:40:17.403633 dockerd[1846]: time="2026-04-17T02:40:17.403362880Z" level=info msg="Daemon has completed initialization" Apr 17 02:40:17.405350 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 02:40:17.405857 dockerd[1846]: time="2026-04-17T02:40:17.404250605Z" level=info msg="API listen on /run/docker.sock" Apr 17 02:40:18.898341 containerd[1607]: time="2026-04-17T02:40:18.898013479Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 02:40:20.032638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873628513.mount: Deactivated successfully. 
Apr 17 02:40:21.479529 containerd[1607]: time="2026-04-17T02:40:21.479355968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:21.481034 containerd[1607]: time="2026-04-17T02:40:21.480949285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 17 02:40:21.483078 containerd[1607]: time="2026-04-17T02:40:21.482678379Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:21.509441 containerd[1607]: time="2026-04-17T02:40:21.509277511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:21.512303 containerd[1607]: time="2026-04-17T02:40:21.512043957Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.613509485s" Apr 17 02:40:21.512303 containerd[1607]: time="2026-04-17T02:40:21.512208053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 02:40:21.515994 containerd[1607]: time="2026-04-17T02:40:21.515908787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 02:40:23.085600 containerd[1607]: time="2026-04-17T02:40:23.085332120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.087108 containerd[1607]: time="2026-04-17T02:40:23.087071704Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 17 02:40:23.090417 containerd[1607]: time="2026-04-17T02:40:23.090375215Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.099959 containerd[1607]: time="2026-04-17T02:40:23.099788555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.101219 containerd[1607]: time="2026-04-17T02:40:23.101175546Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.585213593s" Apr 17 02:40:23.101219 containerd[1607]: time="2026-04-17T02:40:23.101219472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 02:40:23.103992 containerd[1607]: time="2026-04-17T02:40:23.103884674Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 02:40:23.409854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 02:40:23.411531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 02:40:23.990800 containerd[1607]: time="2026-04-17T02:40:23.990691377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.991409 containerd[1607]: time="2026-04-17T02:40:23.991360147Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 17 02:40:23.992663 containerd[1607]: time="2026-04-17T02:40:23.992619091Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.996170 containerd[1607]: time="2026-04-17T02:40:23.996059673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:23.997508 containerd[1607]: time="2026-04-17T02:40:23.997469138Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 893.491708ms" Apr 17 02:40:23.997572 containerd[1607]: time="2026-04-17T02:40:23.997512600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 02:40:24.000288 containerd[1607]: time="2026-04-17T02:40:24.000260773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 02:40:24.330437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 02:40:24.342212 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 02:40:24.409393 kubelet[2140]: E0417 02:40:24.409176 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 02:40:24.412869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 02:40:24.413017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 02:40:24.413368 systemd[1]: kubelet.service: Consumed 911ms CPU time, 110.2M memory peak. Apr 17 02:40:24.842817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735471088.mount: Deactivated successfully. Apr 17 02:40:25.235821 containerd[1607]: time="2026-04-17T02:40:25.235632933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:25.236399 containerd[1607]: time="2026-04-17T02:40:25.236180481Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 17 02:40:25.237155 containerd[1607]: time="2026-04-17T02:40:25.237103939Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:25.239890 containerd[1607]: time="2026-04-17T02:40:25.239750642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:25.240702 containerd[1607]: time="2026-04-17T02:40:25.240666316Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.240372529s" Apr 17 02:40:25.240702 containerd[1607]: time="2026-04-17T02:40:25.240701808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 02:40:25.242382 containerd[1607]: time="2026-04-17T02:40:25.242162916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 02:40:25.683111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029673362.mount: Deactivated successfully. Apr 17 02:40:26.280588 containerd[1607]: time="2026-04-17T02:40:26.280435021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:26.281388 containerd[1607]: time="2026-04-17T02:40:26.281076622Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 17 02:40:26.282222 containerd[1607]: time="2026-04-17T02:40:26.282170917Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:26.284582 containerd[1607]: time="2026-04-17T02:40:26.284519122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:26.285359 containerd[1607]: time="2026-04-17T02:40:26.285316276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with 
image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.043128423s" Apr 17 02:40:26.285421 containerd[1607]: time="2026-04-17T02:40:26.285361168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 02:40:26.288758 containerd[1607]: time="2026-04-17T02:40:26.288684821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 02:40:26.762290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274174707.mount: Deactivated successfully. Apr 17 02:40:26.780264 containerd[1607]: time="2026-04-17T02:40:26.779242049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 02:40:26.780264 containerd[1607]: time="2026-04-17T02:40:26.779571544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 02:40:26.780759 containerd[1607]: time="2026-04-17T02:40:26.780695286Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 02:40:26.782617 containerd[1607]: time="2026-04-17T02:40:26.782566133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 02:40:26.783737 containerd[1607]: time="2026-04-17T02:40:26.783639995Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.917207ms" Apr 17 02:40:26.783737 containerd[1607]: time="2026-04-17T02:40:26.783699336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 02:40:26.785088 containerd[1607]: time="2026-04-17T02:40:26.785057897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 02:40:27.259126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104408790.mount: Deactivated successfully. Apr 17 02:40:28.278368 containerd[1607]: time="2026-04-17T02:40:28.278013064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:28.279012 containerd[1607]: time="2026-04-17T02:40:28.278449444Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 17 02:40:28.279850 containerd[1607]: time="2026-04-17T02:40:28.279794747Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:28.282038 containerd[1607]: time="2026-04-17T02:40:28.281977092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:28.282827 containerd[1607]: time="2026-04-17T02:40:28.282785346Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag 
\"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.497687751s" Apr 17 02:40:28.282827 containerd[1607]: time="2026-04-17T02:40:28.282824429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 02:40:31.469065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:31.469207 systemd[1]: kubelet.service: Consumed 911ms CPU time, 110.2M memory peak. Apr 17 02:40:31.471472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:40:31.496837 systemd[1]: Reload requested from client PID 2306 ('systemctl') (unit session-7.scope)... Apr 17 02:40:31.496862 systemd[1]: Reloading... Apr 17 02:40:31.564588 zram_generator::config[2348]: No configuration found. Apr 17 02:40:31.736960 systemd[1]: Reloading finished in 239 ms. Apr 17 02:40:31.803363 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 02:40:31.803469 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 02:40:31.804283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:31.804510 systemd[1]: kubelet.service: Consumed 103ms CPU time, 98.1M memory peak. Apr 17 02:40:31.806364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:40:31.979198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:31.986508 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 02:40:32.030106 kubelet[2395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 02:40:32.030106 kubelet[2395]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 02:40:32.030106 kubelet[2395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 02:40:32.030589 kubelet[2395]: I0417 02:40:32.030189 2395 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 02:40:32.439772 kubelet[2395]: I0417 02:40:32.439719 2395 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 02:40:32.439772 kubelet[2395]: I0417 02:40:32.439762 2395 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 02:40:32.440299 kubelet[2395]: I0417 02:40:32.440267 2395 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 02:40:32.469170 kubelet[2395]: E0417 02:40:32.469088 2395 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 02:40:32.471632 kubelet[2395]: I0417 02:40:32.471578 2395 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 02:40:32.492398 kubelet[2395]: I0417 02:40:32.492313 2395 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 02:40:32.497159 kubelet[2395]: I0417 02:40:32.497087 2395 server.go:782] 
"--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 02:40:32.497864 kubelet[2395]: I0417 02:40:32.497802 2395 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 02:40:32.498158 kubelet[2395]: I0417 02:40:32.497839 2395 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 02:40:32.498292 kubelet[2395]: I0417 02:40:32.498177 2395 topology_manager.go:138] 
"Creating topology manager with none policy" Apr 17 02:40:32.498292 kubelet[2395]: I0417 02:40:32.498186 2395 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 02:40:32.498435 kubelet[2395]: I0417 02:40:32.498415 2395 state_mem.go:36] "Initialized new in-memory state store" Apr 17 02:40:32.501616 kubelet[2395]: I0417 02:40:32.501570 2395 kubelet.go:480] "Attempting to sync node with API server" Apr 17 02:40:32.501616 kubelet[2395]: I0417 02:40:32.501602 2395 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 02:40:32.501715 kubelet[2395]: I0417 02:40:32.501702 2395 kubelet.go:386] "Adding apiserver pod source" Apr 17 02:40:32.503079 kubelet[2395]: I0417 02:40:32.502872 2395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 02:40:32.505812 kubelet[2395]: E0417 02:40:32.505785 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 02:40:32.506022 kubelet[2395]: E0417 02:40:32.505794 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 02:40:32.506967 kubelet[2395]: I0417 02:40:32.506907 2395 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 02:40:32.509532 kubelet[2395]: I0417 02:40:32.508142 2395 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Apr 17 02:40:32.509532 kubelet[2395]: W0417 02:40:32.508705 2395 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 02:40:32.512420 kubelet[2395]: I0417 02:40:32.512374 2395 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 02:40:32.512500 kubelet[2395]: I0417 02:40:32.512426 2395 server.go:1289] "Started kubelet" Apr 17 02:40:32.514442 kubelet[2395]: I0417 02:40:32.514419 2395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 02:40:32.515380 kubelet[2395]: I0417 02:40:32.514776 2395 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 02:40:32.515380 kubelet[2395]: I0417 02:40:32.515184 2395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 02:40:32.515380 kubelet[2395]: I0417 02:40:32.515339 2395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 02:40:32.518401 kubelet[2395]: I0417 02:40:32.518318 2395 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 02:40:32.519500 kubelet[2395]: I0417 02:40:32.519448 2395 server.go:317] "Adding debug handlers to kubelet server" Apr 17 02:40:32.520749 kubelet[2395]: I0417 02:40:32.520577 2395 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 02:40:32.520749 kubelet[2395]: E0417 02:40:32.520722 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 02:40:32.521324 kubelet[2395]: E0417 02:40:32.521233 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Apr 17 02:40:32.522235 
kubelet[2395]: I0417 02:40:32.522046 2395 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 02:40:32.522777 kubelet[2395]: I0417 02:40:32.522604 2395 reconciler.go:26] "Reconciler: start to sync state" Apr 17 02:40:32.522815 kubelet[2395]: E0417 02:40:32.521335 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a70498d72a9a10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 02:40:32.512383504 +0000 UTC m=+0.521938866,LastTimestamp:2026-04-17 02:40:32.512383504 +0000 UTC m=+0.521938866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 02:40:32.524623 kubelet[2395]: E0417 02:40:32.524565 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 02:40:32.526002 kubelet[2395]: I0417 02:40:32.525957 2395 factory.go:223] Registration of the containerd container factory successfully Apr 17 02:40:32.526002 kubelet[2395]: I0417 02:40:32.525978 2395 factory.go:223] Registration of the systemd container factory successfully Apr 17 02:40:32.526089 kubelet[2395]: I0417 02:40:32.526057 2395 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Apr 17 02:40:32.526563 kubelet[2395]: E0417 02:40:32.526502 2395 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 02:40:32.541152 kubelet[2395]: I0417 02:40:32.541087 2395 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 02:40:32.541152 kubelet[2395]: I0417 02:40:32.541104 2395 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 02:40:32.541152 kubelet[2395]: I0417 02:40:32.541135 2395 state_mem.go:36] "Initialized new in-memory state store" Apr 17 02:40:32.542982 kubelet[2395]: I0417 02:40:32.542889 2395 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 02:40:32.544406 kubelet[2395]: I0417 02:40:32.544377 2395 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 02:40:32.544747 kubelet[2395]: I0417 02:40:32.544519 2395 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 02:40:32.544747 kubelet[2395]: I0417 02:40:32.544570 2395 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 02:40:32.544747 kubelet[2395]: I0417 02:40:32.544585 2395 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 02:40:32.544747 kubelet[2395]: E0417 02:40:32.544614 2395 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 02:40:32.545221 kubelet[2395]: E0417 02:40:32.545183 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 02:40:32.581325 kubelet[2395]: I0417 02:40:32.581078 2395 policy_none.go:49] "None policy: Start" Apr 17 02:40:32.581325 kubelet[2395]: I0417 02:40:32.581412 2395 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 02:40:32.582177 kubelet[2395]: I0417 02:40:32.581556 2395 state_mem.go:35] "Initializing new in-memory state store" Apr 17 02:40:32.593015 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 02:40:32.609498 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 02:40:32.612566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 02:40:32.621779 kubelet[2395]: E0417 02:40:32.621605 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 02:40:32.626330 kubelet[2395]: E0417 02:40:32.626231 2395 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 02:40:32.626595 kubelet[2395]: I0417 02:40:32.626548 2395 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 02:40:32.626595 kubelet[2395]: I0417 02:40:32.626557 2395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 02:40:32.626947 kubelet[2395]: I0417 02:40:32.626891 2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 02:40:32.628830 kubelet[2395]: E0417 02:40:32.628811 2395 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 02:40:32.628879 kubelet[2395]: E0417 02:40:32.628838 2395 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 02:40:32.659295 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. Apr 17 02:40:32.679312 kubelet[2395]: E0417 02:40:32.679212 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:32.683267 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 17 02:40:32.686736 kubelet[2395]: E0417 02:40:32.685211 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:32.699624 systemd[1]: Created slice kubepods-burstable-podd7143cfecd2db762837c693fdee46a73.slice - libcontainer container kubepods-burstable-podd7143cfecd2db762837c693fdee46a73.slice. Apr 17 02:40:32.701876 kubelet[2395]: E0417 02:40:32.701832 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:32.723486 kubelet[2395]: E0417 02:40:32.723369 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Apr 17 02:40:32.725539 kubelet[2395]: I0417 02:40:32.725502 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:32.725539 kubelet[2395]: I0417 02:40:32.725536 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:32.725632 kubelet[2395]: I0417 02:40:32.725556 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 02:40:32.725632 kubelet[2395]: I0417 02:40:32.725568 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:32.725632 kubelet[2395]: I0417 02:40:32.725584 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:32.725632 kubelet[2395]: I0417 02:40:32.725611 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:32.725772 kubelet[2395]: I0417 02:40:32.725652 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:32.725772 kubelet[2395]: I0417 02:40:32.725703 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:32.725772 kubelet[2395]: I0417 02:40:32.725728 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:32.732585 kubelet[2395]: I0417 02:40:32.732525 2395 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 02:40:32.732981 kubelet[2395]: E0417 02:40:32.732913 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 17 02:40:32.937955 kubelet[2395]: I0417 02:40:32.937857 2395 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 02:40:32.938403 kubelet[2395]: E0417 02:40:32.938356 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 17 02:40:32.983402 kubelet[2395]: E0417 02:40:32.982412 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:32.985898 kubelet[2395]: E0417 02:40:32.985873 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:32.986963 containerd[1607]: time="2026-04-17T02:40:32.986891804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 17 02:40:32.987205 containerd[1607]: time="2026-04-17T02:40:32.986899168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 17 02:40:33.003248 kubelet[2395]: E0417 02:40:33.003149 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.004531 containerd[1607]: time="2026-04-17T02:40:33.004417031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7143cfecd2db762837c693fdee46a73,Namespace:kube-system,Attempt:0,}" Apr 17 02:40:33.031645 containerd[1607]: time="2026-04-17T02:40:33.031563471Z" level=info msg="connecting to shim a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494" address="unix:///run/containerd/s/b96fadfb73add78750f1530ce0b6cb00dfb2c9458bc6c8196b9ddf2285e1be2e" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:33.034951 containerd[1607]: time="2026-04-17T02:40:33.034791044Z" level=info msg="connecting to shim 81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304" address="unix:///run/containerd/s/6408a42332a48b1b1448088327bd73f9f48a0b7d991059ec1389ddd37caa7d3c" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:33.044903 containerd[1607]: time="2026-04-17T02:40:33.044878450Z" level=info msg="connecting to shim 861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2" address="unix:///run/containerd/s/1e79a172d60612b5a08ffbd90b67f2d79792db878ddbe2b92dc8a470d0852ca7" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:33.064146 systemd[1]: Started cri-containerd-81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304.scope - libcontainer container 
81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304. Apr 17 02:40:33.068650 systemd[1]: Started cri-containerd-861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2.scope - libcontainer container 861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2. Apr 17 02:40:33.069434 systemd[1]: Started cri-containerd-a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494.scope - libcontainer container a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494. Apr 17 02:40:33.124751 kubelet[2395]: E0417 02:40:33.124604 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Apr 17 02:40:33.127430 containerd[1607]: time="2026-04-17T02:40:33.127392952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304\"" Apr 17 02:40:33.131649 containerd[1607]: time="2026-04-17T02:40:33.131560612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494\"" Apr 17 02:40:33.132291 containerd[1607]: time="2026-04-17T02:40:33.132223840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7143cfecd2db762837c693fdee46a73,Namespace:kube-system,Attempt:0,} returns sandbox id \"861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2\"" Apr 17 02:40:33.132982 kubelet[2395]: E0417 02:40:33.132917 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.133267 kubelet[2395]: E0417 02:40:33.133232 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.133801 kubelet[2395]: E0417 02:40:33.133781 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.170972 containerd[1607]: time="2026-04-17T02:40:33.170877372Z" level=info msg="CreateContainer within sandbox \"861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 02:40:33.170972 containerd[1607]: time="2026-04-17T02:40:33.170899141Z" level=info msg="CreateContainer within sandbox \"a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 02:40:33.171574 containerd[1607]: time="2026-04-17T02:40:33.171547586Z" level=info msg="CreateContainer within sandbox \"81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 02:40:33.191978 containerd[1607]: time="2026-04-17T02:40:33.191852652Z" level=info msg="Container 83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:40:33.195081 containerd[1607]: time="2026-04-17T02:40:33.194877529Z" level=info msg="Container 3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:40:33.200035 containerd[1607]: time="2026-04-17T02:40:33.199114365Z" level=info msg="Container 57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:40:33.204336 containerd[1607]: 
time="2026-04-17T02:40:33.204304966Z" level=info msg="CreateContainer within sandbox \"a7f104e4ab7e430f5399b13c0f907eb374965182e31fa06fbae4b92cfca0d494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade\"" Apr 17 02:40:33.205710 containerd[1607]: time="2026-04-17T02:40:33.205638220Z" level=info msg="CreateContainer within sandbox \"861f4d88bd0ba4ab1a742c5dc5bc90adeb2204f2ed5370b6952dee3b53ba67f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac\"" Apr 17 02:40:33.206233 containerd[1607]: time="2026-04-17T02:40:33.206194292Z" level=info msg="StartContainer for \"83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade\"" Apr 17 02:40:33.206270 containerd[1607]: time="2026-04-17T02:40:33.206244084Z" level=info msg="StartContainer for \"3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac\"" Apr 17 02:40:33.207301 containerd[1607]: time="2026-04-17T02:40:33.207273701Z" level=info msg="connecting to shim 83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade" address="unix:///run/containerd/s/b96fadfb73add78750f1530ce0b6cb00dfb2c9458bc6c8196b9ddf2285e1be2e" protocol=ttrpc version=3 Apr 17 02:40:33.207365 containerd[1607]: time="2026-04-17T02:40:33.207278444Z" level=info msg="connecting to shim 3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac" address="unix:///run/containerd/s/1e79a172d60612b5a08ffbd90b67f2d79792db878ddbe2b92dc8a470d0852ca7" protocol=ttrpc version=3 Apr 17 02:40:33.209196 containerd[1607]: time="2026-04-17T02:40:33.209165875Z" level=info msg="CreateContainer within sandbox \"81559ea729e739926b0a6d2e8355ae2ff37e9606801b731fba2a561b22d58304\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055\"" Apr 17 02:40:33.209619 
containerd[1607]: time="2026-04-17T02:40:33.209578572Z" level=info msg="StartContainer for \"57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055\"" Apr 17 02:40:33.213399 containerd[1607]: time="2026-04-17T02:40:33.213300192Z" level=info msg="connecting to shim 57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055" address="unix:///run/containerd/s/6408a42332a48b1b1448088327bd73f9f48a0b7d991059ec1389ddd37caa7d3c" protocol=ttrpc version=3 Apr 17 02:40:33.228664 systemd[1]: Started cri-containerd-3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac.scope - libcontainer container 3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac. Apr 17 02:40:33.241110 systemd[1]: Started cri-containerd-83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade.scope - libcontainer container 83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade. Apr 17 02:40:33.253787 systemd[1]: Started cri-containerd-57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055.scope - libcontainer container 57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055. 
Apr 17 02:40:33.321810 containerd[1607]: time="2026-04-17T02:40:33.321724906Z" level=info msg="StartContainer for \"3faeb553ebb103c38107cb1a394e2a2983709c462ef99a0f3abef149637a41ac\" returns successfully" Apr 17 02:40:33.322988 containerd[1607]: time="2026-04-17T02:40:33.322622829Z" level=info msg="StartContainer for \"83b75df91237101ccedb6e29e4ed8a0522974c1a7e6fb53cf53ac3e25867eade\" returns successfully" Apr 17 02:40:33.326807 containerd[1607]: time="2026-04-17T02:40:33.326787634Z" level=info msg="StartContainer for \"57f745b2ba4279334a3e1df3c36ffcdab7b3bcb16a18ba8005ed14ef85da2055\" returns successfully" Apr 17 02:40:33.341029 kubelet[2395]: I0417 02:40:33.340917 2395 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 02:40:33.343658 kubelet[2395]: E0417 02:40:33.343589 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 17 02:40:33.570098 kubelet[2395]: E0417 02:40:33.567595 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:33.570098 kubelet[2395]: E0417 02:40:33.568093 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.570098 kubelet[2395]: E0417 02:40:33.570063 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:33.570422 kubelet[2395]: E0417 02:40:33.570181 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:33.572364 kubelet[2395]: E0417 02:40:33.572352 2395 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:33.574176 kubelet[2395]: E0417 02:40:33.573799 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:34.147007 kubelet[2395]: I0417 02:40:34.146950 2395 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 02:40:34.504979 kubelet[2395]: E0417 02:40:34.504815 2395 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 02:40:34.504979 kubelet[2395]: I0417 02:40:34.504989 2395 apiserver.go:52] "Watching apiserver" Apr 17 02:40:34.522424 kubelet[2395]: I0417 02:40:34.522372 2395 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 02:40:34.578951 kubelet[2395]: E0417 02:40:34.578862 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:34.578951 kubelet[2395]: E0417 02:40:34.578873 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 02:40:34.579362 kubelet[2395]: E0417 02:40:34.579049 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:34.579362 kubelet[2395]: E0417 02:40:34.579055 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:34.694875 kubelet[2395]: I0417 02:40:34.694396 2395 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Apr 17 02:40:34.694875 kubelet[2395]: E0417 02:40:34.694948 2395 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 02:40:34.722472 kubelet[2395]: I0417 02:40:34.722152 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 02:40:34.732382 kubelet[2395]: E0417 02:40:34.732297 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 02:40:34.732382 kubelet[2395]: I0417 02:40:34.732345 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:34.736546 kubelet[2395]: E0417 02:40:34.736352 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:34.736546 kubelet[2395]: I0417 02:40:34.736371 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:34.739193 kubelet[2395]: E0417 02:40:34.739172 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:35.582067 kubelet[2395]: I0417 02:40:35.581979 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:35.592910 kubelet[2395]: E0417 02:40:35.592800 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:36.437744 systemd[1]: Reload requested 
from client PID 2686 ('systemctl') (unit session-7.scope)... Apr 17 02:40:36.437765 systemd[1]: Reloading... Apr 17 02:40:36.498974 zram_generator::config[2730]: No configuration found. Apr 17 02:40:36.591007 kubelet[2395]: E0417 02:40:36.590669 2395 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:36.667388 systemd[1]: Reloading finished in 229 ms. Apr 17 02:40:36.692645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:40:36.718155 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 02:40:36.718357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:36.718605 systemd[1]: kubelet.service: Consumed 1.236s CPU time, 132.6M memory peak. Apr 17 02:40:36.720802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 02:40:36.893477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 02:40:36.897074 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 02:40:36.959815 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 02:40:36.959815 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 02:40:36.959815 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 02:40:36.959815 kubelet[2774]: I0417 02:40:36.959540 2774 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 02:40:36.971684 kubelet[2774]: I0417 02:40:36.971528 2774 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 02:40:36.971684 kubelet[2774]: I0417 02:40:36.971590 2774 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 02:40:36.972804 kubelet[2774]: I0417 02:40:36.972786 2774 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 02:40:36.978213 kubelet[2774]: I0417 02:40:36.977896 2774 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 02:40:36.983206 kubelet[2774]: I0417 02:40:36.983060 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 02:40:36.989088 kubelet[2774]: I0417 02:40:36.989065 2774 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 02:40:36.993263 kubelet[2774]: I0417 02:40:36.993243 2774 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 02:40:36.993462 kubelet[2774]: I0417 02:40:36.993412 2774 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 02:40:36.993577 kubelet[2774]: I0417 02:40:36.993451 2774 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 02:40:36.993679 kubelet[2774]: I0417 02:40:36.993579 2774 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 02:40:36.993679 
kubelet[2774]: I0417 02:40:36.993586 2774 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 02:40:36.993679 kubelet[2774]: I0417 02:40:36.993637 2774 state_mem.go:36] "Initialized new in-memory state store" Apr 17 02:40:36.993821 kubelet[2774]: I0417 02:40:36.993802 2774 kubelet.go:480] "Attempting to sync node with API server" Apr 17 02:40:36.993845 kubelet[2774]: I0417 02:40:36.993822 2774 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 02:40:36.993845 kubelet[2774]: I0417 02:40:36.993841 2774 kubelet.go:386] "Adding apiserver pod source" Apr 17 02:40:36.993885 kubelet[2774]: I0417 02:40:36.993851 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 02:40:37.000624 kubelet[2774]: I0417 02:40:37.000530 2774 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 02:40:37.001191 kubelet[2774]: I0417 02:40:37.001165 2774 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 02:40:37.007766 kubelet[2774]: I0417 02:40:37.007137 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 02:40:37.007766 kubelet[2774]: I0417 02:40:37.007314 2774 server.go:1289] "Started kubelet" Apr 17 02:40:37.009012 kubelet[2774]: I0417 02:40:37.008795 2774 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 02:40:37.011916 kubelet[2774]: I0417 02:40:37.011778 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 02:40:37.012983 kubelet[2774]: I0417 02:40:37.012972 2774 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 02:40:37.013553 kubelet[2774]: I0417 02:40:37.013195 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 02:40:37.017382 
kubelet[2774]: I0417 02:40:37.017331 2774 server.go:317] "Adding debug handlers to kubelet server" Apr 17 02:40:37.019208 kubelet[2774]: I0417 02:40:37.019156 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 02:40:37.024410 kubelet[2774]: I0417 02:40:37.024332 2774 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 02:40:37.024410 kubelet[2774]: I0417 02:40:37.024440 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 02:40:37.024590 kubelet[2774]: I0417 02:40:37.024531 2774 reconciler.go:26] "Reconciler: start to sync state" Apr 17 02:40:37.032395 kubelet[2774]: I0417 02:40:37.032225 2774 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 02:40:37.037600 kubelet[2774]: I0417 02:40:37.037547 2774 factory.go:223] Registration of the containerd container factory successfully Apr 17 02:40:37.037600 kubelet[2774]: I0417 02:40:37.037574 2774 factory.go:223] Registration of the systemd container factory successfully Apr 17 02:40:37.040063 kubelet[2774]: I0417 02:40:37.040012 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 02:40:37.041197 kubelet[2774]: I0417 02:40:37.041159 2774 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 02:40:37.041691 kubelet[2774]: I0417 02:40:37.041576 2774 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 02:40:37.041691 kubelet[2774]: I0417 02:40:37.041693 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 02:40:37.041691 kubelet[2774]: I0417 02:40:37.041744 2774 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 02:40:37.042099 kubelet[2774]: E0417 02:40:37.041868 2774 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 02:40:37.084121 kubelet[2774]: I0417 02:40:37.084025 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 02:40:37.084121 kubelet[2774]: I0417 02:40:37.084050 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 02:40:37.084121 kubelet[2774]: I0417 02:40:37.084074 2774 state_mem.go:36] "Initialized new in-memory state store" Apr 17 02:40:37.084468 kubelet[2774]: I0417 02:40:37.084278 2774 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 02:40:37.084468 kubelet[2774]: I0417 02:40:37.084285 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 02:40:37.084468 kubelet[2774]: I0417 02:40:37.084342 2774 policy_none.go:49] "None policy: Start" Apr 17 02:40:37.084468 kubelet[2774]: I0417 02:40:37.084385 2774 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 02:40:37.084468 kubelet[2774]: I0417 02:40:37.084393 2774 state_mem.go:35] "Initializing new in-memory state store" Apr 17 02:40:37.084543 kubelet[2774]: I0417 02:40:37.084481 2774 state_mem.go:75] "Updated machine memory state" Apr 17 02:40:37.090114 kubelet[2774]: E0417 02:40:37.090029 2774 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 02:40:37.090295 kubelet[2774]: I0417 02:40:37.090230 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 02:40:37.090295 kubelet[2774]: I0417 02:40:37.090239 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 02:40:37.090602 kubelet[2774]: I0417 02:40:37.090565 2774 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 02:40:37.092753 kubelet[2774]: E0417 02:40:37.092679 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 02:40:37.145078 kubelet[2774]: I0417 02:40:37.144954 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 02:40:37.145078 kubelet[2774]: I0417 02:40:37.145138 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:37.145078 kubelet[2774]: I0417 02:40:37.145171 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.155233 kubelet[2774]: E0417 02:40:37.154967 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:37.203881 kubelet[2774]: I0417 02:40:37.203706 2774 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 02:40:37.218606 kubelet[2774]: I0417 02:40:37.218196 2774 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 17 02:40:37.218606 kubelet[2774]: I0417 02:40:37.218577 2774 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 02:40:37.326988 kubelet[2774]: I0417 02:40:37.326636 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:37.326988 kubelet[2774]: I0417 02:40:37.326897 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:37.326988 kubelet[2774]: I0417 02:40:37.327009 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.326988 kubelet[2774]: I0417 02:40:37.327106 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.326988 kubelet[2774]: I0417 02:40:37.327134 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 02:40:37.328046 kubelet[2774]: I0417 02:40:37.327156 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7143cfecd2db762837c693fdee46a73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7143cfecd2db762837c693fdee46a73\") " pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:37.328046 kubelet[2774]: I0417 02:40:37.327176 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.328046 kubelet[2774]: I0417 02:40:37.327233 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.328046 kubelet[2774]: I0417 02:40:37.327299 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 02:40:37.757940 kubelet[2774]: E0417 02:40:37.757756 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:37.757940 kubelet[2774]: E0417 02:40:37.757780 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:37.757940 kubelet[2774]: E0417 02:40:37.758010 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:37.996206 kubelet[2774]: I0417 02:40:37.996069 2774 apiserver.go:52] "Watching apiserver" Apr 17 02:40:38.025753 kubelet[2774]: I0417 02:40:38.025210 2774 desired_state_of_world_populator.go:158] "Finished populating initial 
desired state of world" Apr 17 02:40:38.064792 kubelet[2774]: E0417 02:40:38.064674 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:38.065209 kubelet[2774]: E0417 02:40:38.065183 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:38.065320 kubelet[2774]: I0417 02:40:38.065282 2774 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:38.075042 kubelet[2774]: E0417 02:40:38.074952 2774 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 02:40:38.075633 kubelet[2774]: E0417 02:40:38.075303 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:38.149843 kubelet[2774]: I0417 02:40:38.149695 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.149635401 podStartE2EDuration="1.149635401s" podCreationTimestamp="2026-04-17 02:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:40:38.139949407 +0000 UTC m=+1.237067589" watchObservedRunningTime="2026-04-17 02:40:38.149635401 +0000 UTC m=+1.246753583" Apr 17 02:40:38.163525 kubelet[2774]: I0417 02:40:38.163242 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.163177791 podStartE2EDuration="3.163177791s" podCreationTimestamp="2026-04-17 02:40:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:40:38.150156367 +0000 UTC m=+1.247274553" watchObservedRunningTime="2026-04-17 02:40:38.163177791 +0000 UTC m=+1.260295974" Apr 17 02:40:38.163525 kubelet[2774]: I0417 02:40:38.163397 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.163392097 podStartE2EDuration="1.163392097s" podCreationTimestamp="2026-04-17 02:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:40:38.163101666 +0000 UTC m=+1.260219849" watchObservedRunningTime="2026-04-17 02:40:38.163392097 +0000 UTC m=+1.260510285" Apr 17 02:40:39.070912 kubelet[2774]: E0417 02:40:39.070793 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:39.070912 kubelet[2774]: E0417 02:40:39.070815 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:40.076091 kubelet[2774]: E0417 02:40:40.075890 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:40.520206 kubelet[2774]: E0417 02:40:40.520119 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:42.565090 kubelet[2774]: I0417 02:40:42.564984 2774 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 02:40:42.566580 containerd[1607]: time="2026-04-17T02:40:42.566523349Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 02:40:42.566891 kubelet[2774]: I0417 02:40:42.566872 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 02:40:43.343245 kubelet[2774]: E0417 02:40:43.341957 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:43.681810 systemd[1]: Created slice kubepods-besteffort-pod5e6f3108_ce1a_43d2_91ba_43ca07a8c66c.slice - libcontainer container kubepods-besteffort-pod5e6f3108_ce1a_43d2_91ba_43ca07a8c66c.slice. Apr 17 02:40:43.771286 kubelet[2774]: I0417 02:40:43.771176 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e6f3108-ce1a-43d2-91ba-43ca07a8c66c-kube-proxy\") pod \"kube-proxy-zch8r\" (UID: \"5e6f3108-ce1a-43d2-91ba-43ca07a8c66c\") " pod="kube-system/kube-proxy-zch8r" Apr 17 02:40:43.778905 kubelet[2774]: I0417 02:40:43.776602 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4927\" (UniqueName: \"kubernetes.io/projected/5e6f3108-ce1a-43d2-91ba-43ca07a8c66c-kube-api-access-w4927\") pod \"kube-proxy-zch8r\" (UID: \"5e6f3108-ce1a-43d2-91ba-43ca07a8c66c\") " pod="kube-system/kube-proxy-zch8r" Apr 17 02:40:43.778905 kubelet[2774]: I0417 02:40:43.777078 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e6f3108-ce1a-43d2-91ba-43ca07a8c66c-xtables-lock\") pod \"kube-proxy-zch8r\" (UID: \"5e6f3108-ce1a-43d2-91ba-43ca07a8c66c\") " pod="kube-system/kube-proxy-zch8r" Apr 17 02:40:43.778905 kubelet[2774]: I0417 02:40:43.777218 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e6f3108-ce1a-43d2-91ba-43ca07a8c66c-lib-modules\") pod \"kube-proxy-zch8r\" (UID: \"5e6f3108-ce1a-43d2-91ba-43ca07a8c66c\") " pod="kube-system/kube-proxy-zch8r" Apr 17 02:40:43.803161 systemd[1]: Created slice kubepods-besteffort-pod5188fbad_9a6b_497f_9834_5e8885076032.slice - libcontainer container kubepods-besteffort-pod5188fbad_9a6b_497f_9834_5e8885076032.slice. Apr 17 02:40:43.879773 kubelet[2774]: I0417 02:40:43.879610 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl86l\" (UniqueName: \"kubernetes.io/projected/5188fbad-9a6b-497f-9834-5e8885076032-kube-api-access-zl86l\") pod \"tigera-operator-6bf85f8dd-dq27f\" (UID: \"5188fbad-9a6b-497f-9834-5e8885076032\") " pod="tigera-operator/tigera-operator-6bf85f8dd-dq27f" Apr 17 02:40:43.879773 kubelet[2774]: I0417 02:40:43.879758 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5188fbad-9a6b-497f-9834-5e8885076032-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-dq27f\" (UID: \"5188fbad-9a6b-497f-9834-5e8885076032\") " pod="tigera-operator/tigera-operator-6bf85f8dd-dq27f" Apr 17 02:40:43.989016 kubelet[2774]: E0417 02:40:43.988177 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:43.989904 containerd[1607]: time="2026-04-17T02:40:43.989792655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zch8r,Uid:5e6f3108-ce1a-43d2-91ba-43ca07a8c66c,Namespace:kube-system,Attempt:0,}" Apr 17 02:40:44.013525 containerd[1607]: time="2026-04-17T02:40:44.013469101Z" level=info msg="connecting to shim 0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5" 
address="unix:///run/containerd/s/24c0493891ad984301360000487126c379707f0ccc8c098d0783f68d4b84c100" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:44.032091 systemd[1]: Started cri-containerd-0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5.scope - libcontainer container 0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5. Apr 17 02:40:44.052523 containerd[1607]: time="2026-04-17T02:40:44.052486061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zch8r,Uid:5e6f3108-ce1a-43d2-91ba-43ca07a8c66c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5\"" Apr 17 02:40:44.053282 kubelet[2774]: E0417 02:40:44.053259 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:44.062496 containerd[1607]: time="2026-04-17T02:40:44.062247027Z" level=info msg="CreateContainer within sandbox \"0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 02:40:44.072738 containerd[1607]: time="2026-04-17T02:40:44.072630356Z" level=info msg="Container 360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:40:44.084258 containerd[1607]: time="2026-04-17T02:40:44.084194166Z" level=info msg="CreateContainer within sandbox \"0d4f2de4d2e954f6c1741ba36155ddbd5ed53af2f5b8d7cfa1a8a115f4bf10d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29\"" Apr 17 02:40:44.085373 containerd[1607]: time="2026-04-17T02:40:44.085344575Z" level=info msg="StartContainer for \"360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29\"" Apr 17 02:40:44.086571 containerd[1607]: time="2026-04-17T02:40:44.086532797Z" 
level=info msg="connecting to shim 360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29" address="unix:///run/containerd/s/24c0493891ad984301360000487126c379707f0ccc8c098d0783f68d4b84c100" protocol=ttrpc version=3 Apr 17 02:40:44.101886 kubelet[2774]: E0417 02:40:44.101846 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:44.104221 systemd[1]: Started cri-containerd-360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29.scope - libcontainer container 360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29. Apr 17 02:40:44.107903 containerd[1607]: time="2026-04-17T02:40:44.107839036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-dq27f,Uid:5188fbad-9a6b-497f-9834-5e8885076032,Namespace:tigera-operator,Attempt:0,}" Apr 17 02:40:44.128432 containerd[1607]: time="2026-04-17T02:40:44.128371699Z" level=info msg="connecting to shim 9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8" address="unix:///run/containerd/s/1b4f0cc2aa226bce0f9ff209a3772709005c8eae9ccf372263cec27801681dac" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:44.152902 systemd[1]: Started cri-containerd-9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8.scope - libcontainer container 9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8. 
Apr 17 02:40:44.160544 containerd[1607]: time="2026-04-17T02:40:44.160519215Z" level=info msg="StartContainer for \"360748529ac0433cb0f2a7bf436857f4a3700ac8113ad5dfea492ddd75e00a29\" returns successfully" Apr 17 02:40:44.213383 containerd[1607]: time="2026-04-17T02:40:44.213280023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-dq27f,Uid:5188fbad-9a6b-497f-9834-5e8885076032,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8\"" Apr 17 02:40:44.216394 containerd[1607]: time="2026-04-17T02:40:44.216324416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 02:40:45.111353 kubelet[2774]: E0417 02:40:45.111088 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:45.126983 kubelet[2774]: I0417 02:40:45.126820 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zch8r" podStartSLOduration=2.126715038 podStartE2EDuration="2.126715038s" podCreationTimestamp="2026-04-17 02:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:40:45.126395616 +0000 UTC m=+8.223513806" watchObservedRunningTime="2026-04-17 02:40:45.126715038 +0000 UTC m=+8.223833226" Apr 17 02:40:45.355670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494948551.mount: Deactivated successfully. 
Apr 17 02:40:46.115374 kubelet[2774]: E0417 02:40:46.115305 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:46.157951 containerd[1607]: time="2026-04-17T02:40:46.157771668Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:46.158487 containerd[1607]: time="2026-04-17T02:40:46.158198843Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 02:40:46.159615 containerd[1607]: time="2026-04-17T02:40:46.159564506Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:46.161517 containerd[1607]: time="2026-04-17T02:40:46.161476380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:40:46.162047 containerd[1607]: time="2026-04-17T02:40:46.161993910Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.945601236s" Apr 17 02:40:46.162047 containerd[1607]: time="2026-04-17T02:40:46.162030054Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 02:40:46.169643 containerd[1607]: time="2026-04-17T02:40:46.169591926Z" level=info msg="CreateContainer within sandbox 
\"9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 02:40:46.184520 containerd[1607]: time="2026-04-17T02:40:46.183908147Z" level=info msg="Container 244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:40:46.194455 containerd[1607]: time="2026-04-17T02:40:46.194371618Z" level=info msg="CreateContainer within sandbox \"9c205e89985a03e93c7028129db33452ba141f4f590bdaee2e9f0cfa64cbd0e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d\"" Apr 17 02:40:46.195985 containerd[1607]: time="2026-04-17T02:40:46.195158878Z" level=info msg="StartContainer for \"244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d\"" Apr 17 02:40:46.195985 containerd[1607]: time="2026-04-17T02:40:46.195720035Z" level=info msg="connecting to shim 244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d" address="unix:///run/containerd/s/1b4f0cc2aa226bce0f9ff209a3772709005c8eae9ccf372263cec27801681dac" protocol=ttrpc version=3 Apr 17 02:40:46.241162 systemd[1]: Started cri-containerd-244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d.scope - libcontainer container 244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d. 
Apr 17 02:40:46.271345 containerd[1607]: time="2026-04-17T02:40:46.271272639Z" level=info msg="StartContainer for \"244aa941455fe31c31ac79345c533c729077af60c3974fcdf92821f7a4be658d\" returns successfully" Apr 17 02:40:47.130867 kubelet[2774]: I0417 02:40:47.130704 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-dq27f" podStartSLOduration=2.182119161 podStartE2EDuration="4.130624083s" podCreationTimestamp="2026-04-17 02:40:43 +0000 UTC" firstStartedPulling="2026-04-17 02:40:44.215100491 +0000 UTC m=+7.312218682" lastFinishedPulling="2026-04-17 02:40:46.163605429 +0000 UTC m=+9.260723604" observedRunningTime="2026-04-17 02:40:47.130327845 +0000 UTC m=+10.227446028" watchObservedRunningTime="2026-04-17 02:40:47.130624083 +0000 UTC m=+10.227742265" Apr 17 02:40:47.586078 kubelet[2774]: E0417 02:40:47.585977 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:48.155961 kubelet[2774]: E0417 02:40:48.155877 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:49.160905 kubelet[2774]: E0417 02:40:49.160091 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:50.525950 kubelet[2774]: E0417 02:40:50.525801 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:51.172961 kubelet[2774]: E0417 02:40:51.172808 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 17 02:40:52.302334 sudo[1825]: pam_unix(sudo:session): session closed for user root Apr 17 02:40:52.308052 sshd[1824]: Connection closed by 10.0.0.1 port 57586 Apr 17 02:40:52.306081 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Apr 17 02:40:52.314519 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:57586.service: Deactivated successfully. Apr 17 02:40:52.320872 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 02:40:52.321080 systemd[1]: session-7.scope: Consumed 7.810s CPU time, 232.5M memory peak. Apr 17 02:40:52.334153 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Apr 17 02:40:52.337215 systemd-logind[1598]: Removed session 7. Apr 17 02:40:53.264960 update_engine[1599]: I20260417 02:40:53.264174 1599 update_attempter.cc:509] Updating boot flags... Apr 17 02:40:55.825285 systemd[1]: Created slice kubepods-besteffort-pod8441339d_20e8_4a8f_85f8_dd6730dfcaf2.slice - libcontainer container kubepods-besteffort-pod8441339d_20e8_4a8f_85f8_dd6730dfcaf2.slice. 
Apr 17 02:40:55.901536 kubelet[2774]: I0417 02:40:55.901473 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8441339d-20e8-4a8f-85f8-dd6730dfcaf2-tigera-ca-bundle\") pod \"calico-typha-75ff9d4458-bqjx4\" (UID: \"8441339d-20e8-4a8f-85f8-dd6730dfcaf2\") " pod="calico-system/calico-typha-75ff9d4458-bqjx4" Apr 17 02:40:55.903067 kubelet[2774]: I0417 02:40:55.902998 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8441339d-20e8-4a8f-85f8-dd6730dfcaf2-typha-certs\") pod \"calico-typha-75ff9d4458-bqjx4\" (UID: \"8441339d-20e8-4a8f-85f8-dd6730dfcaf2\") " pod="calico-system/calico-typha-75ff9d4458-bqjx4" Apr 17 02:40:55.903185 kubelet[2774]: I0417 02:40:55.903076 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtrp\" (UniqueName: \"kubernetes.io/projected/8441339d-20e8-4a8f-85f8-dd6730dfcaf2-kube-api-access-tbtrp\") pod \"calico-typha-75ff9d4458-bqjx4\" (UID: \"8441339d-20e8-4a8f-85f8-dd6730dfcaf2\") " pod="calico-system/calico-typha-75ff9d4458-bqjx4" Apr 17 02:40:56.006138 kubelet[2774]: I0417 02:40:56.004597 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-node-certs\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006138 kubelet[2774]: I0417 02:40:56.004666 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-policysync\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" 
Apr 17 02:40:56.006138 kubelet[2774]: I0417 02:40:56.004680 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-sys-fs\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006138 kubelet[2774]: I0417 02:40:56.004693 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vf82\" (UniqueName: \"kubernetes.io/projected/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-kube-api-access-6vf82\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006138 kubelet[2774]: I0417 02:40:56.004791 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-var-run-calico\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006788 kubelet[2774]: I0417 02:40:56.004805 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-flexvol-driver-host\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006788 kubelet[2774]: I0417 02:40:56.004825 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-bpffs\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006788 kubelet[2774]: I0417 02:40:56.004838 2774 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-cni-bin-dir\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006788 kubelet[2774]: I0417 02:40:56.004859 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-cni-net-dir\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006788 kubelet[2774]: I0417 02:40:56.004872 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-xtables-lock\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006917 kubelet[2774]: I0417 02:40:56.004888 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-lib-modules\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006917 kubelet[2774]: I0417 02:40:56.004901 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-tigera-ca-bundle\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.006917 kubelet[2774]: I0417 02:40:56.004913 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-cni-log-dir\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.007154 kubelet[2774]: I0417 02:40:56.007087 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-nodeproc\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.007207 kubelet[2774]: I0417 02:40:56.007198 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42d0cc5d-f412-40fa-a2b8-cea8d32e3139-var-lib-calico\") pod \"calico-node-7rfvn\" (UID: \"42d0cc5d-f412-40fa-a2b8-cea8d32e3139\") " pod="calico-system/calico-node-7rfvn" Apr 17 02:40:56.012591 systemd[1]: Created slice kubepods-besteffort-pod42d0cc5d_f412_40fa_a2b8_cea8d32e3139.slice - libcontainer container kubepods-besteffort-pod42d0cc5d_f412_40fa_a2b8_cea8d32e3139.slice. Apr 17 02:40:56.115026 kubelet[2774]: E0417 02:40:56.114726 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.115026 kubelet[2774]: W0417 02:40:56.114833 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.115026 kubelet[2774]: E0417 02:40:56.114897 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.116291 kubelet[2774]: E0417 02:40:56.115326 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.116291 kubelet[2774]: W0417 02:40:56.115340 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.116291 kubelet[2774]: E0417 02:40:56.115352 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.116291 kubelet[2774]: E0417 02:40:56.115974 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614" Apr 17 02:40:56.116291 kubelet[2774]: E0417 02:40:56.116087 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.116291 kubelet[2774]: W0417 02:40:56.116098 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.116291 kubelet[2774]: E0417 02:40:56.116110 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.116455 kubelet[2774]: E0417 02:40:56.116345 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.116455 kubelet[2774]: W0417 02:40:56.116352 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.116455 kubelet[2774]: E0417 02:40:56.116359 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.118462 kubelet[2774]: E0417 02:40:56.117963 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.118462 kubelet[2774]: W0417 02:40:56.117975 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.118462 kubelet[2774]: E0417 02:40:56.117986 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.120265 kubelet[2774]: E0417 02:40:56.120179 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.120265 kubelet[2774]: W0417 02:40:56.120252 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.120347 kubelet[2774]: E0417 02:40:56.120288 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.122447 kubelet[2774]: E0417 02:40:56.122399 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.122447 kubelet[2774]: W0417 02:40:56.122426 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.122447 kubelet[2774]: E0417 02:40:56.122437 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.122711 kubelet[2774]: E0417 02:40:56.122675 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.122711 kubelet[2774]: W0417 02:40:56.122703 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.122776 kubelet[2774]: E0417 02:40:56.122715 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.122994 kubelet[2774]: E0417 02:40:56.122968 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.122994 kubelet[2774]: W0417 02:40:56.122990 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.123088 kubelet[2774]: E0417 02:40:56.122998 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123122 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.125020 kubelet[2774]: W0417 02:40:56.123128 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123134 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123477 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.125020 kubelet[2774]: W0417 02:40:56.123484 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123491 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123905 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.125020 kubelet[2774]: W0417 02:40:56.123913 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.125020 kubelet[2774]: E0417 02:40:56.123957 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.126384 kubelet[2774]: E0417 02:40:56.126301 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.126384 kubelet[2774]: W0417 02:40:56.126379 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.126504 kubelet[2774]: E0417 02:40:56.126460 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.126996 kubelet[2774]: E0417 02:40:56.126967 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.126996 kubelet[2774]: W0417 02:40:56.126992 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.127063 kubelet[2774]: E0417 02:40:56.127026 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.127584 kubelet[2774]: E0417 02:40:56.127536 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.127584 kubelet[2774]: W0417 02:40:56.127559 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.127584 kubelet[2774]: E0417 02:40:56.127569 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.128018 kubelet[2774]: E0417 02:40:56.127995 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.128018 kubelet[2774]: W0417 02:40:56.128015 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.128069 kubelet[2774]: E0417 02:40:56.128022 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.128493 kubelet[2774]: E0417 02:40:56.128477 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.128493 kubelet[2774]: W0417 02:40:56.128485 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.128493 kubelet[2774]: E0417 02:40:56.128493 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.128734 kubelet[2774]: E0417 02:40:56.128694 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.128734 kubelet[2774]: W0417 02:40:56.128716 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.128734 kubelet[2774]: E0417 02:40:56.128723 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.129026 kubelet[2774]: E0417 02:40:56.129007 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.129026 kubelet[2774]: W0417 02:40:56.129025 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.129076 kubelet[2774]: E0417 02:40:56.129033 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.129560 kubelet[2774]: E0417 02:40:56.129536 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.129560 kubelet[2774]: W0417 02:40:56.129556 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.129605 kubelet[2774]: E0417 02:40:56.129564 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.130710 kubelet[2774]: E0417 02:40:56.130582 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.130710 kubelet[2774]: W0417 02:40:56.130697 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.130710 kubelet[2774]: E0417 02:40:56.130845 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.132090 kubelet[2774]: E0417 02:40:56.131589 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.132090 kubelet[2774]: W0417 02:40:56.131599 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.132090 kubelet[2774]: E0417 02:40:56.131608 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.134086 kubelet[2774]: E0417 02:40:56.133406 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.134458 kubelet[2774]: W0417 02:40:56.133913 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.135056 kubelet[2774]: E0417 02:40:56.134384 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.136316 kubelet[2774]: E0417 02:40:56.136249 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:56.136748 kubelet[2774]: E0417 02:40:56.136707 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.136912 kubelet[2774]: W0417 02:40:56.136871 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.136912 kubelet[2774]: E0417 02:40:56.136898 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.137726 kubelet[2774]: E0417 02:40:56.137647 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.138072 kubelet[2774]: W0417 02:40:56.138042 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.138072 kubelet[2774]: E0417 02:40:56.138071 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.141969 containerd[1607]: time="2026-04-17T02:40:56.141608518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75ff9d4458-bqjx4,Uid:8441339d-20e8-4a8f-85f8-dd6730dfcaf2,Namespace:calico-system,Attempt:0,}" Apr 17 02:40:56.142660 kubelet[2774]: E0417 02:40:56.142398 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.142660 kubelet[2774]: W0417 02:40:56.142415 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.142660 kubelet[2774]: E0417 02:40:56.142435 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.142910 kubelet[2774]: E0417 02:40:56.142790 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.142910 kubelet[2774]: W0417 02:40:56.142899 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.142988 kubelet[2774]: E0417 02:40:56.142912 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.162954 kubelet[2774]: E0417 02:40:56.160857 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.162954 kubelet[2774]: W0417 02:40:56.162018 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.162954 kubelet[2774]: E0417 02:40:56.162185 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.183161 kubelet[2774]: E0417 02:40:56.183048 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.183537 kubelet[2774]: W0417 02:40:56.183368 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.183537 kubelet[2774]: E0417 02:40:56.183444 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.210831 kubelet[2774]: E0417 02:40:56.210744 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.210831 kubelet[2774]: W0417 02:40:56.210774 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.210831 kubelet[2774]: E0417 02:40:56.210837 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.211269 kubelet[2774]: E0417 02:40:56.211246 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.211294 kubelet[2774]: W0417 02:40:56.211270 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.211294 kubelet[2774]: E0417 02:40:56.211282 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.211454 kubelet[2774]: E0417 02:40:56.211436 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.211476 kubelet[2774]: W0417 02:40:56.211455 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.211476 kubelet[2774]: E0417 02:40:56.211464 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.212022 kubelet[2774]: E0417 02:40:56.211998 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.212056 kubelet[2774]: W0417 02:40:56.212025 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.212056 kubelet[2774]: E0417 02:40:56.212038 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.214998 kubelet[2774]: E0417 02:40:56.214087 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.214998 kubelet[2774]: W0417 02:40:56.214137 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.214998 kubelet[2774]: E0417 02:40:56.214252 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.215437 kubelet[2774]: E0417 02:40:56.215402 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.215437 kubelet[2774]: W0417 02:40:56.215430 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.215485 kubelet[2774]: E0417 02:40:56.215441 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.216092 kubelet[2774]: E0417 02:40:56.216057 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.216092 kubelet[2774]: W0417 02:40:56.216080 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.216092 kubelet[2774]: E0417 02:40:56.216090 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.227459 kubelet[2774]: E0417 02:40:56.218916 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.228210 kubelet[2774]: W0417 02:40:56.228032 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.228329 kubelet[2774]: E0417 02:40:56.228288 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.228704 containerd[1607]: time="2026-04-17T02:40:56.228549437Z" level=info msg="connecting to shim 84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731" address="unix:///run/containerd/s/f49150e33705b04329ee8fd1d8105f5403989f55a99d45e83fe6ec684213d9e1" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:56.229234 kubelet[2774]: E0417 02:40:56.229205 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.229234 kubelet[2774]: W0417 02:40:56.229231 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.230302 kubelet[2774]: E0417 02:40:56.229320 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.233425 kubelet[2774]: E0417 02:40:56.233167 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.236270 kubelet[2774]: W0417 02:40:56.236121 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.236875 kubelet[2774]: E0417 02:40:56.236770 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.237599 kubelet[2774]: E0417 02:40:56.237588 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.237852 kubelet[2774]: W0417 02:40:56.237742 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.237852 kubelet[2774]: E0417 02:40:56.237756 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.238111 kubelet[2774]: E0417 02:40:56.238104 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.238318 kubelet[2774]: W0417 02:40:56.238153 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.238318 kubelet[2774]: E0417 02:40:56.238163 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.239085 kubelet[2774]: E0417 02:40:56.239063 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.239136 kubelet[2774]: W0417 02:40:56.239128 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.239219 kubelet[2774]: E0417 02:40:56.239211 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.239387 kubelet[2774]: E0417 02:40:56.239380 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.239422 kubelet[2774]: W0417 02:40:56.239417 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.239451 kubelet[2774]: E0417 02:40:56.239444 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.239607 kubelet[2774]: E0417 02:40:56.239601 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.239664 kubelet[2774]: W0417 02:40:56.239658 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.239693 kubelet[2774]: E0417 02:40:56.239688 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.239807 kubelet[2774]: E0417 02:40:56.239802 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.239836 kubelet[2774]: W0417 02:40:56.239831 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.239866 kubelet[2774]: E0417 02:40:56.239861 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.240166 kubelet[2774]: E0417 02:40:56.240088 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.240166 kubelet[2774]: W0417 02:40:56.240095 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.240166 kubelet[2774]: E0417 02:40:56.240101 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.240302 kubelet[2774]: E0417 02:40:56.240275 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.240320 kubelet[2774]: W0417 02:40:56.240302 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.240320 kubelet[2774]: E0417 02:40:56.240314 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.240485 kubelet[2774]: E0417 02:40:56.240469 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.240506 kubelet[2774]: W0417 02:40:56.240486 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.240506 kubelet[2774]: E0417 02:40:56.240492 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.240664 kubelet[2774]: E0417 02:40:56.240627 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.240664 kubelet[2774]: W0417 02:40:56.240663 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.240699 kubelet[2774]: E0417 02:40:56.240669 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.241499 kubelet[2774]: E0417 02:40:56.241090 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.241499 kubelet[2774]: W0417 02:40:56.241119 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.241499 kubelet[2774]: E0417 02:40:56.241127 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.241499 kubelet[2774]: I0417 02:40:56.241320 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7195f3e9-744b-4ed0-a1cb-10c30647c614-varrun\") pod \"csi-node-driver-5kt7f\" (UID: \"7195f3e9-744b-4ed0-a1cb-10c30647c614\") " pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:40:56.241499 kubelet[2774]: E0417 02:40:56.241409 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.241499 kubelet[2774]: W0417 02:40:56.241414 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.241499 kubelet[2774]: E0417 02:40:56.241420 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.241666 kubelet[2774]: E0417 02:40:56.241563 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.241666 kubelet[2774]: W0417 02:40:56.241568 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.241666 kubelet[2774]: E0417 02:40:56.241573 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.241903 kubelet[2774]: E0417 02:40:56.241856 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.241903 kubelet[2774]: W0417 02:40:56.241896 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.242004 kubelet[2774]: E0417 02:40:56.241904 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.242123 kubelet[2774]: I0417 02:40:56.242005 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdh6\" (UniqueName: \"kubernetes.io/projected/7195f3e9-744b-4ed0-a1cb-10c30647c614-kube-api-access-vmdh6\") pod \"csi-node-driver-5kt7f\" (UID: \"7195f3e9-744b-4ed0-a1cb-10c30647c614\") " pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:40:56.242391 kubelet[2774]: E0417 02:40:56.242357 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.242391 kubelet[2774]: W0417 02:40:56.242385 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.242463 kubelet[2774]: E0417 02:40:56.242396 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.242594 kubelet[2774]: E0417 02:40:56.242572 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.242594 kubelet[2774]: W0417 02:40:56.242584 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.242594 kubelet[2774]: E0417 02:40:56.242590 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.242785 kubelet[2774]: E0417 02:40:56.242746 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.242785 kubelet[2774]: W0417 02:40:56.242752 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.242785 kubelet[2774]: E0417 02:40:56.242757 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.242842 kubelet[2774]: I0417 02:40:56.242797 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7195f3e9-744b-4ed0-a1cb-10c30647c614-kubelet-dir\") pod \"csi-node-driver-5kt7f\" (UID: \"7195f3e9-744b-4ed0-a1cb-10c30647c614\") " pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:40:56.243193 kubelet[2774]: E0417 02:40:56.243122 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.243193 kubelet[2774]: W0417 02:40:56.243132 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.243193 kubelet[2774]: E0417 02:40:56.243138 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.243384 kubelet[2774]: E0417 02:40:56.243310 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.243384 kubelet[2774]: W0417 02:40:56.243322 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.243384 kubelet[2774]: E0417 02:40:56.243331 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.243858 kubelet[2774]: E0417 02:40:56.243819 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.243858 kubelet[2774]: W0417 02:40:56.243851 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.243909 kubelet[2774]: E0417 02:40:56.243863 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.244223 kubelet[2774]: I0417 02:40:56.244167 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7195f3e9-744b-4ed0-a1cb-10c30647c614-registration-dir\") pod \"csi-node-driver-5kt7f\" (UID: \"7195f3e9-744b-4ed0-a1cb-10c30647c614\") " pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:40:56.244442 kubelet[2774]: E0417 02:40:56.244404 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.244442 kubelet[2774]: W0417 02:40:56.244434 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.244520 kubelet[2774]: E0417 02:40:56.244444 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.244520 kubelet[2774]: I0417 02:40:56.244488 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7195f3e9-744b-4ed0-a1cb-10c30647c614-socket-dir\") pod \"csi-node-driver-5kt7f\" (UID: \"7195f3e9-744b-4ed0-a1cb-10c30647c614\") " pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:40:56.244813 kubelet[2774]: E0417 02:40:56.244785 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.244813 kubelet[2774]: W0417 02:40:56.244812 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.244964 kubelet[2774]: E0417 02:40:56.244823 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.245123 kubelet[2774]: E0417 02:40:56.245071 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.245123 kubelet[2774]: W0417 02:40:56.245097 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.245123 kubelet[2774]: E0417 02:40:56.245106 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.246296 kubelet[2774]: E0417 02:40:56.246191 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.246296 kubelet[2774]: W0417 02:40:56.246316 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.246666 kubelet[2774]: E0417 02:40:56.246416 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.247663 kubelet[2774]: E0417 02:40:56.247604 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.247663 kubelet[2774]: W0417 02:40:56.247627 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.247663 kubelet[2774]: E0417 02:40:56.247655 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.276047 systemd[1]: Started cri-containerd-84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731.scope - libcontainer container 84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731. 
Apr 17 02:40:56.336664 containerd[1607]: time="2026-04-17T02:40:56.336542303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7rfvn,Uid:42d0cc5d-f412-40fa-a2b8-cea8d32e3139,Namespace:calico-system,Attempt:0,}" Apr 17 02:40:56.351606 kubelet[2774]: E0417 02:40:56.350513 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.354693 kubelet[2774]: W0417 02:40:56.351713 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.354693 kubelet[2774]: E0417 02:40:56.352091 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.356760 kubelet[2774]: E0417 02:40:56.356184 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.356760 kubelet[2774]: W0417 02:40:56.356456 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.357058 kubelet[2774]: E0417 02:40:56.356774 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.359492 kubelet[2774]: E0417 02:40:56.358543 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.361200 kubelet[2774]: W0417 02:40:56.360881 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.361526 kubelet[2774]: E0417 02:40:56.361430 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.362994 kubelet[2774]: E0417 02:40:56.362226 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.362994 kubelet[2774]: W0417 02:40:56.362244 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.362994 kubelet[2774]: E0417 02:40:56.362264 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.363274 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.367732 kubelet[2774]: W0417 02:40:56.363302 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.363729 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.364980 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.367732 kubelet[2774]: W0417 02:40:56.365014 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.365113 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.366598 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.367732 kubelet[2774]: W0417 02:40:56.367206 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.367732 kubelet[2774]: E0417 02:40:56.367721 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.370846 kubelet[2774]: E0417 02:40:56.369380 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.370846 kubelet[2774]: W0417 02:40:56.369407 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.370846 kubelet[2774]: E0417 02:40:56.369450 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.370846 kubelet[2774]: E0417 02:40:56.369807 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.370846 kubelet[2774]: W0417 02:40:56.369814 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.370846 kubelet[2774]: E0417 02:40:56.369822 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.371167 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.372469 kubelet[2774]: W0417 02:40:56.371192 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.371241 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.371826 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.372469 kubelet[2774]: W0417 02:40:56.371835 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.371845 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.372296 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.372469 kubelet[2774]: W0417 02:40:56.372303 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.372469 kubelet[2774]: E0417 02:40:56.372311 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.377841 kubelet[2774]: E0417 02:40:56.374674 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.377841 kubelet[2774]: W0417 02:40:56.374888 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.377841 kubelet[2774]: E0417 02:40:56.378833 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.377841 kubelet[2774]: E0417 02:40:56.380816 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.377841 kubelet[2774]: W0417 02:40:56.380868 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.377841 kubelet[2774]: E0417 02:40:56.381200 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.383008 kubelet[2774]: E0417 02:40:56.382807 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.383008 kubelet[2774]: W0417 02:40:56.382832 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.383008 kubelet[2774]: E0417 02:40:56.382862 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.383201 kubelet[2774]: E0417 02:40:56.383183 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.383201 kubelet[2774]: W0417 02:40:56.383199 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.383244 kubelet[2774]: E0417 02:40:56.383208 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.383375 kubelet[2774]: E0417 02:40:56.383360 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.383428 kubelet[2774]: W0417 02:40:56.383375 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.383428 kubelet[2774]: E0417 02:40:56.383381 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.388133 kubelet[2774]: E0417 02:40:56.388034 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.388133 kubelet[2774]: W0417 02:40:56.388103 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.388133 kubelet[2774]: E0417 02:40:56.388188 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.390971 kubelet[2774]: E0417 02:40:56.390721 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.390971 kubelet[2774]: W0417 02:40:56.390732 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.391055 kubelet[2774]: E0417 02:40:56.390978 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.391249 kubelet[2774]: E0417 02:40:56.391221 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.391290 kubelet[2774]: W0417 02:40:56.391282 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.391290 kubelet[2774]: E0417 02:40:56.391290 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.391564 kubelet[2774]: E0417 02:40:56.391546 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.391564 kubelet[2774]: W0417 02:40:56.391563 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.391601 kubelet[2774]: E0417 02:40:56.391571 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.392005 kubelet[2774]: E0417 02:40:56.391889 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.392207 kubelet[2774]: W0417 02:40:56.392005 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.392207 kubelet[2774]: E0417 02:40:56.392016 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.393687 kubelet[2774]: E0417 02:40:56.393527 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.394533 kubelet[2774]: W0417 02:40:56.393755 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.394533 kubelet[2774]: E0417 02:40:56.393888 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.395244 kubelet[2774]: E0417 02:40:56.395201 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.395244 kubelet[2774]: W0417 02:40:56.395238 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.397004 kubelet[2774]: E0417 02:40:56.395300 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:40:56.397004 kubelet[2774]: E0417 02:40:56.395595 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.397004 kubelet[2774]: W0417 02:40:56.395602 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.397004 kubelet[2774]: E0417 02:40:56.395608 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.447495 containerd[1607]: time="2026-04-17T02:40:56.447396479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75ff9d4458-bqjx4,Uid:8441339d-20e8-4a8f-85f8-dd6730dfcaf2,Namespace:calico-system,Attempt:0,} returns sandbox id \"84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731\"" Apr 17 02:40:56.471370 kubelet[2774]: E0417 02:40:56.471184 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:40:56.491078 containerd[1607]: time="2026-04-17T02:40:56.490316387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 02:40:56.491331 kubelet[2774]: E0417 02:40:56.490528 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:40:56.491331 kubelet[2774]: W0417 02:40:56.490542 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:40:56.491331 kubelet[2774]: E0417 02:40:56.490603 2774 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:40:56.502586 containerd[1607]: time="2026-04-17T02:40:56.502052380Z" level=info msg="connecting to shim 016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029" address="unix:///run/containerd/s/82913b0dcd5d2c1c902609f12cb8d07be7e6dc4f1bb58210c3191926b58eca19" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:40:56.563107 systemd[1]: Started cri-containerd-016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029.scope - libcontainer container 016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029. Apr 17 02:40:56.651493 containerd[1607]: time="2026-04-17T02:40:56.650654309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7rfvn,Uid:42d0cc5d-f412-40fa-a2b8-cea8d32e3139,Namespace:calico-system,Attempt:0,} returns sandbox id \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\"" Apr 17 02:40:58.043086 kubelet[2774]: E0417 02:40:58.042906 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614" Apr 17 02:40:58.350165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318548428.mount: Deactivated successfully. 
Apr 17 02:41:00.052037 kubelet[2774]: E0417 02:41:00.051360 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614" Apr 17 02:41:00.703692 containerd[1607]: time="2026-04-17T02:41:00.703451327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:00.705799 containerd[1607]: time="2026-04-17T02:41:00.705496954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 02:41:00.710668 containerd[1607]: time="2026-04-17T02:41:00.710480338Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:00.731663 containerd[1607]: time="2026-04-17T02:41:00.731488972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:00.735718 containerd[1607]: time="2026-04-17T02:41:00.735622753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.245250122s" Apr 17 02:41:00.735808 containerd[1607]: time="2026-04-17T02:41:00.735717762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 02:41:00.738345 containerd[1607]: time="2026-04-17T02:41:00.738321897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 02:41:00.769441 containerd[1607]: time="2026-04-17T02:41:00.768616336Z" level=info msg="CreateContainer within sandbox \"84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 02:41:00.801457 containerd[1607]: time="2026-04-17T02:41:00.801353664Z" level=info msg="Container f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:00.824834 containerd[1607]: time="2026-04-17T02:41:00.824708607Z" level=info msg="CreateContainer within sandbox \"84964f443f5584ffac310a28ec742108e2a15c3f9b961df3f6cc0e9d3df6e731\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1\"" Apr 17 02:41:00.826703 containerd[1607]: time="2026-04-17T02:41:00.826672296Z" level=info msg="StartContainer for \"f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1\"" Apr 17 02:41:00.830822 containerd[1607]: time="2026-04-17T02:41:00.830572796Z" level=info msg="connecting to shim f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1" address="unix:///run/containerd/s/f49150e33705b04329ee8fd1d8105f5403989f55a99d45e83fe6ec684213d9e1" protocol=ttrpc version=3 Apr 17 02:41:00.864837 systemd[1]: Started cri-containerd-f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1.scope - libcontainer container f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1. 
Apr 17 02:41:00.988097 containerd[1607]: time="2026-04-17T02:41:00.986570109Z" level=info msg="StartContainer for \"f58cda10112759e25ec0d6ef2f47f2de3d2e5fab4d41e2d8ffb80570ad39d9d1\" returns successfully" Apr 17 02:41:01.311372 kubelet[2774]: E0417 02:41:01.310159 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:01.397822 kubelet[2774]: E0417 02:41:01.397768 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.397822 kubelet[2774]: W0417 02:41:01.397807 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.397822 kubelet[2774]: E0417 02:41:01.397881 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.398818 kubelet[2774]: E0417 02:41:01.398222 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.398818 kubelet[2774]: W0417 02:41:01.398228 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.398818 kubelet[2774]: E0417 02:41:01.398235 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.398818 kubelet[2774]: E0417 02:41:01.398357 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.398818 kubelet[2774]: W0417 02:41:01.398362 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.398818 kubelet[2774]: E0417 02:41:01.398519 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.399011 kubelet[2774]: E0417 02:41:01.398897 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399011 kubelet[2774]: W0417 02:41:01.398903 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399011 kubelet[2774]: E0417 02:41:01.398910 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.399058 kubelet[2774]: E0417 02:41:01.399055 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399075 kubelet[2774]: W0417 02:41:01.399059 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399075 kubelet[2774]: E0417 02:41:01.399065 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.399154 kubelet[2774]: E0417 02:41:01.399132 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399174 kubelet[2774]: W0417 02:41:01.399155 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399174 kubelet[2774]: E0417 02:41:01.399161 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.399269 kubelet[2774]: E0417 02:41:01.399236 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399287 kubelet[2774]: W0417 02:41:01.399283 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399328 kubelet[2774]: E0417 02:41:01.399289 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.399388 kubelet[2774]: E0417 02:41:01.399368 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399407 kubelet[2774]: W0417 02:41:01.399389 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399407 kubelet[2774]: E0417 02:41:01.399395 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.399519 kubelet[2774]: E0417 02:41:01.399501 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399538 kubelet[2774]: W0417 02:41:01.399521 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399538 kubelet[2774]: E0417 02:41:01.399526 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.399618 kubelet[2774]: E0417 02:41:01.399599 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399658 kubelet[2774]: W0417 02:41:01.399619 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399658 kubelet[2774]: E0417 02:41:01.399624 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.399742 kubelet[2774]: E0417 02:41:01.399723 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399760 kubelet[2774]: W0417 02:41:01.399749 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.399760 kubelet[2774]: E0417 02:41:01.399754 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.399967 kubelet[2774]: E0417 02:41:01.399915 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.399989 kubelet[2774]: W0417 02:41:01.399982 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.400003 kubelet[2774]: E0417 02:41:01.399989 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.400185 kubelet[2774]: E0417 02:41:01.400169 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.400203 kubelet[2774]: W0417 02:41:01.400185 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.400203 kubelet[2774]: E0417 02:41:01.400191 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.400376 kubelet[2774]: E0417 02:41:01.400350 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.400398 kubelet[2774]: W0417 02:41:01.400388 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.400398 kubelet[2774]: E0417 02:41:01.400394 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.400498 kubelet[2774]: E0417 02:41:01.400479 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.400552 kubelet[2774]: W0417 02:41:01.400500 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.400552 kubelet[2774]: E0417 02:41:01.400506 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.420831 kubelet[2774]: E0417 02:41:01.420737 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.420831 kubelet[2774]: W0417 02:41:01.420816 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.421325 kubelet[2774]: E0417 02:41:01.420988 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.424117 kubelet[2774]: E0417 02:41:01.424077 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.424117 kubelet[2774]: W0417 02:41:01.424107 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.424117 kubelet[2774]: E0417 02:41:01.424123 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.427582 kubelet[2774]: E0417 02:41:01.427438 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.427582 kubelet[2774]: W0417 02:41:01.427526 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.428757 kubelet[2774]: E0417 02:41:01.427694 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.429297 kubelet[2774]: E0417 02:41:01.429157 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.429796 kubelet[2774]: W0417 02:41:01.429583 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.429831 kubelet[2774]: E0417 02:41:01.429788 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.432321 kubelet[2774]: E0417 02:41:01.432235 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.432321 kubelet[2774]: W0417 02:41:01.432301 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.432321 kubelet[2774]: E0417 02:41:01.432370 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.432875 kubelet[2774]: E0417 02:41:01.432719 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.433285 kubelet[2774]: W0417 02:41:01.433103 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.433890 kubelet[2774]: E0417 02:41:01.433661 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.437027 kubelet[2774]: E0417 02:41:01.436900 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.437027 kubelet[2774]: W0417 02:41:01.436990 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.437183 kubelet[2774]: E0417 02:41:01.437055 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.440273 kubelet[2774]: E0417 02:41:01.439883 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.441096 kubelet[2774]: W0417 02:41:01.440367 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.441096 kubelet[2774]: E0417 02:41:01.440493 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.444313 kubelet[2774]: E0417 02:41:01.444206 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.444313 kubelet[2774]: W0417 02:41:01.444296 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.444516 kubelet[2774]: E0417 02:41:01.444353 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.446891 kubelet[2774]: E0417 02:41:01.446862 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.446891 kubelet[2774]: W0417 02:41:01.446886 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.447032 kubelet[2774]: E0417 02:41:01.446897 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.447163 kubelet[2774]: E0417 02:41:01.447141 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.447187 kubelet[2774]: W0417 02:41:01.447163 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.447187 kubelet[2774]: E0417 02:41:01.447172 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.447443 kubelet[2774]: E0417 02:41:01.447424 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.447468 kubelet[2774]: W0417 02:41:01.447443 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.447468 kubelet[2774]: E0417 02:41:01.447458 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.448403 kubelet[2774]: E0417 02:41:01.448362 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.448403 kubelet[2774]: W0417 02:41:01.448388 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.448403 kubelet[2774]: E0417 02:41:01.448398 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.448907 kubelet[2774]: E0417 02:41:01.448715 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.451816 kubelet[2774]: W0417 02:41:01.451711 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.451816 kubelet[2774]: E0417 02:41:01.451846 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.453158 kubelet[2774]: E0417 02:41:01.453010 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.453158 kubelet[2774]: W0417 02:41:01.453021 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.453158 kubelet[2774]: E0417 02:41:01.453033 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.454088 kubelet[2774]: E0417 02:41:01.454077 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.454145 kubelet[2774]: W0417 02:41:01.454138 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.454189 kubelet[2774]: E0417 02:41:01.454182 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:01.454534 kubelet[2774]: E0417 02:41:01.454525 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.454584 kubelet[2774]: W0417 02:41:01.454578 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.454619 kubelet[2774]: E0417 02:41:01.454613 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:01.456831 kubelet[2774]: E0417 02:41:01.456711 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:01.458432 kubelet[2774]: W0417 02:41:01.457281 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:01.458432 kubelet[2774]: E0417 02:41:01.457866 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:02.044389 kubelet[2774]: E0417 02:41:02.044120 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614" Apr 17 02:41:02.314193 kubelet[2774]: E0417 02:41:02.313145 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:02.366702 kubelet[2774]: I0417 02:41:02.366489 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75ff9d4458-bqjx4" podStartSLOduration=3.117217673 podStartE2EDuration="7.366357781s" podCreationTimestamp="2026-04-17 02:40:55 +0000 UTC" firstStartedPulling="2026-04-17 02:40:56.488454341 +0000 UTC m=+19.585572516" lastFinishedPulling="2026-04-17 02:41:00.737594445 +0000 UTC m=+23.834712624" observedRunningTime="2026-04-17 02:41:01.393801173 +0000 UTC m=+24.490919357" watchObservedRunningTime="2026-04-17 02:41:02.366357781 +0000 UTC m=+25.463476002" Apr 17 02:41:02.384139 kubelet[2774]: E0417 
02:41:02.383800 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.387902 kubelet[2774]: W0417 02:41:02.385250 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.388978 kubelet[2774]: E0417 02:41:02.388109 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:02.391443 kubelet[2774]: E0417 02:41:02.391210 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.394994 kubelet[2774]: W0417 02:41:02.393357 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.394994 kubelet[2774]: E0417 02:41:02.393586 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:02.394994 kubelet[2774]: E0417 02:41:02.394218 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.394994 kubelet[2774]: W0417 02:41:02.394229 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.394994 kubelet[2774]: E0417 02:41:02.394242 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:02.394994 kubelet[2774]: E0417 02:41:02.394690 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.394994 kubelet[2774]: W0417 02:41:02.394698 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.394994 kubelet[2774]: E0417 02:41:02.394707 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.395030 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.397535 kubelet[2774]: W0417 02:41:02.395036 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.395043 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.395306 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.397535 kubelet[2774]: W0417 02:41:02.395312 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.395318 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.397213 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 02:41:02.397535 kubelet[2774]: W0417 02:41:02.397319 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 02:41:02.397535 kubelet[2774]: E0417 02:41:02.397456 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 02:41:02.403962 kubelet[2774]: E0417 02:41:02.403129 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.403962 kubelet[2774]: W0417 02:41:02.403163 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.403962 kubelet[2774]: E0417 02:41:02.403195 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.404229 kubelet[2774]: E0417 02:41:02.404174 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.404229 kubelet[2774]: W0417 02:41:02.404199 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.404229 kubelet[2774]: E0417 02:41:02.404208 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.404472 kubelet[2774]: E0417 02:41:02.404432 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.404472 kubelet[2774]: W0417 02:41:02.404454 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.404472 kubelet[2774]: E0417 02:41:02.404461 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.405350 kubelet[2774]: E0417 02:41:02.405246 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.405350 kubelet[2774]: W0417 02:41:02.405315 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.405350 kubelet[2774]: E0417 02:41:02.405427 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.407466 kubelet[2774]: E0417 02:41:02.407246 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.407466 kubelet[2774]: W0417 02:41:02.407272 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.407466 kubelet[2774]: E0417 02:41:02.407287 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.410962 kubelet[2774]: E0417 02:41:02.410770 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.410962 kubelet[2774]: W0417 02:41:02.410913 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.411765 kubelet[2774]: E0417 02:41:02.411087 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.414051 kubelet[2774]: E0417 02:41:02.413453 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.414051 kubelet[2774]: W0417 02:41:02.413509 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.414051 kubelet[2774]: E0417 02:41:02.413602 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.420623 kubelet[2774]: E0417 02:41:02.420259 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.420623 kubelet[2774]: W0417 02:41:02.420560 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.428500 kubelet[2774]: E0417 02:41:02.428033 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 02:41:02.484172 kubelet[2774]: E0417 02:41:02.484040 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.484172 kubelet[2774]: W0417 02:41:02.484093 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.484172 kubelet[2774]: E0417 02:41:02.484147 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.484976 kubelet[2774]: E0417 02:41:02.484563 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.484976 kubelet[2774]: W0417 02:41:02.484599 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.484976 kubelet[2774]: E0417 02:41:02.484609 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.484976 kubelet[2774]: E0417 02:41:02.484865 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.484976 kubelet[2774]: W0417 02:41:02.484871 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.484976 kubelet[2774]: E0417 02:41:02.484877 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.485168 kubelet[2774]: E0417 02:41:02.485080 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.485168 kubelet[2774]: W0417 02:41:02.485087 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.485168 kubelet[2774]: E0417 02:41:02.485094 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.485749 kubelet[2774]: E0417 02:41:02.485392 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.485749 kubelet[2774]: W0417 02:41:02.485403 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.485749 kubelet[2774]: E0417 02:41:02.485413 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.486212 kubelet[2774]: E0417 02:41:02.486160 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.486212 kubelet[2774]: W0417 02:41:02.486170 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.486212 kubelet[2774]: E0417 02:41:02.486177 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.487147 kubelet[2774]: E0417 02:41:02.486490 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.487147 kubelet[2774]: W0417 02:41:02.486499 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.487147 kubelet[2774]: E0417 02:41:02.486506 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.487147 kubelet[2774]: E0417 02:41:02.487055 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.487147 kubelet[2774]: W0417 02:41:02.487062 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.487147 kubelet[2774]: E0417 02:41:02.487069 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 02:41:02.487391 kubelet[2774]: E0417 02:41:02.487353 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.487391 kubelet[2774]: W0417 02:41:02.487360 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.487391 kubelet[2774]: E0417 02:41:02.487369 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.487682 kubelet[2774]: E0417 02:41:02.487662 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.487682 kubelet[2774]: W0417 02:41:02.487681 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.487734 kubelet[2774]: E0417 02:41:02.487690 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.488179 kubelet[2774]: E0417 02:41:02.488139 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.488179 kubelet[2774]: W0417 02:41:02.488174 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.488251 kubelet[2774]: E0417 02:41:02.488182 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.489748 kubelet[2774]: E0417 02:41:02.489595 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.489748 kubelet[2774]: W0417 02:41:02.489699 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.489748 kubelet[2774]: E0417 02:41:02.489843 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.491516 kubelet[2774]: E0417 02:41:02.491415 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.491516 kubelet[2774]: W0417 02:41:02.491425 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.491516 kubelet[2774]: E0417 02:41:02.491436 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.493253 kubelet[2774]: E0417 02:41:02.493161 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.493253 kubelet[2774]: W0417 02:41:02.493236 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.493253 kubelet[2774]: E0417 02:41:02.493333 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.494521 kubelet[2774]: E0417 02:41:02.493755 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.494521 kubelet[2774]: W0417 02:41:02.493761 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.494521 kubelet[2774]: E0417 02:41:02.493770 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.494521 kubelet[2774]: E0417 02:41:02.494413 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.494521 kubelet[2774]: W0417 02:41:02.494422 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.494521 kubelet[2774]: E0417 02:41:02.494432 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.495440 kubelet[2774]: E0417 02:41:02.494877 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.495440 kubelet[2774]: W0417 02:41:02.494913 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.495440 kubelet[2774]: E0417 02:41:02.494960 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 02:41:02.495490 kubelet[2774]: E0417 02:41:02.495445 2774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 02:41:02.495490 kubelet[2774]: W0417 02:41:02.495453 2774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 02:41:02.495490 kubelet[2774]: E0417 02:41:02.495460 2774 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 02:41:02.731014 containerd[1607]: time="2026-04-17T02:41:02.730725259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:02.734427 containerd[1607]: time="2026-04-17T02:41:02.734118101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 02:41:02.742697 containerd[1607]: time="2026-04-17T02:41:02.742526010Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:02.746428 containerd[1607]: time="2026-04-17T02:41:02.746360261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:02.747444 containerd[1607]: time="2026-04-17T02:41:02.747206827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.00885153s"
Apr 17 02:41:02.748415 containerd[1607]: time="2026-04-17T02:41:02.747722988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 02:41:02.785621 containerd[1607]: time="2026-04-17T02:41:02.785481256Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 02:41:02.839976 containerd[1607]: time="2026-04-17T02:41:02.839820699Z" level=info msg="Container 47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:41:02.859019 containerd[1607]: time="2026-04-17T02:41:02.858883864Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b\""
Apr 17 02:41:02.866165 containerd[1607]: time="2026-04-17T02:41:02.865314420Z" level=info msg="StartContainer for \"47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b\""
Apr 17 02:41:02.869868 containerd[1607]: time="2026-04-17T02:41:02.869769361Z" level=info msg="connecting to shim 47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b" address="unix:///run/containerd/s/82913b0dcd5d2c1c902609f12cb8d07be7e6dc4f1bb58210c3191926b58eca19" protocol=ttrpc version=3
Apr 17 02:41:02.999100 systemd[1]: Started cri-containerd-47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b.scope - libcontainer container 47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b.
Apr 17 02:41:03.147178 containerd[1607]: time="2026-04-17T02:41:03.147009224Z" level=info msg="StartContainer for \"47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b\" returns successfully"
Apr 17 02:41:03.163750 systemd[1]: cri-containerd-47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b.scope: Deactivated successfully.
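[Editor's note on the repeated driver-call errors above: the kubelet's FlexVolume probe executes each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the `init` argument and unmarshals its stdout as JSON. Because the `nodeagent~uds/uds` executable is missing here, stdout is empty and the decode fails with "unexpected end of JSON input". The kubelet is Go, but the parse step can be sketched in Python; `INIT_OK` follows the usual FlexVolume `init` reply shape and is an illustrative assumption, not output captured from this node.]

```python
import json

# Illustrative FlexVolume `init` reply: "attach": false tells the kubelet the
# driver does not implement attach/detach (hypothetical example payload).
INIT_OK = '{"status": "Success", "capabilities": {"attach": false}}'

def parse_driver_reply(stdout: str) -> dict:
    """Mimic the kubelet's unmarshal step: empty stdout is a parse error."""
    try:
        return json.loads(stdout)
    except json.JSONDecodeError as exc:
        # Go's encoding/json reports empty input as "unexpected end of JSON
        # input"; Python's json module raises "Expecting value" instead.
        raise RuntimeError(f"failed to unmarshal driver output {stdout!r}: {exc}")
```

The probe loop in the log retries every driver directory each time plugins are re-probed, which is why the same three messages repeat for tens of milliseconds until probing settles.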
Apr 17 02:41:03.170678 containerd[1607]: time="2026-04-17T02:41:03.170588417Z" level=info msg="received container exit event container_id:\"47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b\" id:\"47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b\" pid:3541 exited_at:{seconds:1776393663 nanos:169571454}"
Apr 17 02:41:03.244074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47e3a746032bbd0f2568cc73d2b8af7e7a9bb45aeb8733d6c22d0a3efb4b789b-rootfs.mount: Deactivated successfully.
Apr 17 02:41:03.320714 kubelet[2774]: E0417 02:41:03.318662 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:41:03.334959 containerd[1607]: time="2026-04-17T02:41:03.334257195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 02:41:04.044750 kubelet[2774]: E0417 02:41:04.044477 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:04.331350 kubelet[2774]: E0417 02:41:04.331106 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:41:06.044615 kubelet[2774]: E0417 02:41:06.044437 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:08.044305 kubelet[2774]: E0417 02:41:08.044113 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:10.046997 kubelet[2774]: E0417 02:41:10.046800 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:12.020048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419887755.mount: Deactivated successfully.
Apr 17 02:41:12.044336 kubelet[2774]: E0417 02:41:12.044018 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:12.100537 containerd[1607]: time="2026-04-17T02:41:12.100392125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:12.101534 containerd[1607]: time="2026-04-17T02:41:12.100954845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 02:41:12.118064 containerd[1607]: time="2026-04-17T02:41:12.117860853Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:12.122166 containerd[1607]: time="2026-04-17T02:41:12.122034399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.78772979s"
Apr 17 02:41:12.122166 containerd[1607]: time="2026-04-17T02:41:12.122122362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 02:41:12.122585 containerd[1607]: time="2026-04-17T02:41:12.122412746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:12.137382 containerd[1607]: time="2026-04-17T02:41:12.137253914Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 02:41:12.212554 containerd[1607]: time="2026-04-17T02:41:12.212443060Z" level=info msg="Container 31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:41:12.272366 containerd[1607]: time="2026-04-17T02:41:12.271514179Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351\""
Apr 17 02:41:12.280551 containerd[1607]: time="2026-04-17T02:41:12.280408000Z" level=info msg="StartContainer for \"31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351\""
Apr 17 02:41:12.291469 containerd[1607]: time="2026-04-17T02:41:12.291353191Z" level=info msg="connecting to shim 31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351" address="unix:///run/containerd/s/82913b0dcd5d2c1c902609f12cb8d07be7e6dc4f1bb58210c3191926b58eca19" protocol=ttrpc version=3
Apr 17 02:41:12.327119 systemd[1]: Started cri-containerd-31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351.scope - libcontainer container 31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351.
Apr 17 02:41:12.483647 containerd[1607]: time="2026-04-17T02:41:12.482815227Z" level=info msg="StartContainer for \"31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351\" returns successfully"
Apr 17 02:41:12.536743 systemd[1]: cri-containerd-31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351.scope: Deactivated successfully.
Apr 17 02:41:12.551376 containerd[1607]: time="2026-04-17T02:41:12.551218303Z" level=info msg="received container exit event container_id:\"31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351\" id:\"31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351\" pid:3600 exited_at:{seconds:1776393672 nanos:542327519}"
Apr 17 02:41:12.596705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c498f95b05de781e870ce5c6708262ffde559d07499efd9d0b1ea37d2bb351-rootfs.mount: Deactivated successfully.
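[Editor's note on the `dns.go:153` "Nameserver limits exceeded" warnings above: the glibc resolver honors at most three `nameserver` entries in resolv.conf (MAXNS in resolv.h), so the kubelet truncates the pod's nameserver list and logs the applied line, here "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation; the fourth server in the example is hypothetical, since the log does not say which entries were dropped.]

```python
MAXNS = 3  # glibc resolv.h limit on nameserver entries per resolv.conf

def apply_nameserver_limit(servers: list[str]) -> tuple[list[str], list[str]]:
    """Keep the first MAXNS servers (the 'applied nameserver line'); report the rest as omitted."""
    return servers[:MAXNS], servers[MAXNS:]

# "8.8.4.4" is an assumed extra entry for illustration only.
applied, omitted = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
```

The warning repeats on each pod sync because the node-level resolv.conf itself still carries more than three entries.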
Apr 17 02:41:13.509519 containerd[1607]: time="2026-04-17T02:41:13.509429143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 02:41:14.043913 kubelet[2774]: E0417 02:41:14.043623 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:16.043896 kubelet[2774]: E0417 02:41:16.043720 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:17.693545 containerd[1607]: time="2026-04-17T02:41:17.693425051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:17.694490 containerd[1607]: time="2026-04-17T02:41:17.693965698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 02:41:17.695038 containerd[1607]: time="2026-04-17T02:41:17.694997436Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:17.696984 containerd[1607]: time="2026-04-17T02:41:17.696908558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 02:41:17.697652 containerd[1607]: time="2026-04-17T02:41:17.697610216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.188108366s"
Apr 17 02:41:17.697652 containerd[1607]: time="2026-04-17T02:41:17.697644545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 02:41:17.708868 containerd[1607]: time="2026-04-17T02:41:17.708615667Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 02:41:17.721735 containerd[1607]: time="2026-04-17T02:41:17.721621492Z" level=info msg="Container 37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01: CDI devices from CRI Config.CDIDevices: []"
Apr 17 02:41:17.747578 containerd[1607]: time="2026-04-17T02:41:17.747486857Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01\""
Apr 17 02:41:17.750026 containerd[1607]: time="2026-04-17T02:41:17.749994195Z" level=info msg="StartContainer for \"37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01\""
Apr 17 02:41:17.752019 containerd[1607]: time="2026-04-17T02:41:17.751605970Z" level=info msg="connecting to shim 37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01" address="unix:///run/containerd/s/82913b0dcd5d2c1c902609f12cb8d07be7e6dc4f1bb58210c3191926b58eca19" protocol=ttrpc version=3
Apr 17 02:41:17.788166 systemd[1]: Started cri-containerd-37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01.scope - libcontainer container 37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01.
Apr 17 02:41:17.903353 containerd[1607]: time="2026-04-17T02:41:17.903024446Z" level=info msg="StartContainer for \"37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01\" returns successfully"
Apr 17 02:41:18.051412 kubelet[2774]: E0417 02:41:18.048657 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614"
Apr 17 02:41:18.736403 systemd[1]: cri-containerd-37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01.scope: Deactivated successfully.
Apr 17 02:41:18.737022 systemd[1]: cri-containerd-37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01.scope: Consumed 919ms CPU time, 180.2M memory peak, 3.3M read from disk, 177M written to disk.
Apr 17 02:41:18.778033 containerd[1607]: time="2026-04-17T02:41:18.777797958Z" level=info msg="received container exit event container_id:\"37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01\" id:\"37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01\" pid:3662 exited_at:{seconds:1776393678 nanos:775084752}"
Apr 17 02:41:18.794039 kubelet[2774]: I0417 02:41:18.793706 2774 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 17 02:41:18.905567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37cd176c0b8148fa62902b3508d8b36974015dd47772a1276d417b400da40e01-rootfs.mount: Deactivated successfully.
Apr 17 02:41:18.956110 systemd[1]: Created slice kubepods-besteffort-podfc05bc4d_2ba5_414a_a341_24efa8a02e41.slice - libcontainer container kubepods-besteffort-podfc05bc4d_2ba5_414a_a341_24efa8a02e41.slice.
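[Editor's note on the `exited_at:{seconds:... nanos:...}` fields in the exit events above: they are a protobuf-style Timestamp, Unix epoch seconds plus a nanosecond remainder. Converting them back to UTC lines up with the surrounding journal times, e.g. the install-cni exit at seconds:1776393678 falls inside the 02:41:18 entries. A small sketch, truncating nanoseconds to Python's microsecond resolution:]

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    """Convert a protobuf-style {seconds, nanos} exit timestamp to a UTC datetime."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
        microsecond=nanos // 1000  # drop sub-microsecond precision
    )

# ebpf-bootstrap exit event from the log: seconds:1776393672 nanos:542327519
ts = exited_at_to_utc(1776393672, 542327519)
```

This journal appears to run on UTC, since the containerd `time="2026-04-17T..."` fields match the syslog-style prefixes directly.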
Apr 17 02:41:18.984223 systemd[1]: Created slice kubepods-besteffort-pod9fdbdd91_ac27_4de1_a7d9_71f179ae5c6b.slice - libcontainer container kubepods-besteffort-pod9fdbdd91_ac27_4de1_a7d9_71f179ae5c6b.slice.
Apr 17 02:41:18.994043 systemd[1]: Created slice kubepods-besteffort-pod0212b919_43a9_472f_805b_f3b9834353cb.slice - libcontainer container kubepods-besteffort-pod0212b919_43a9_472f_805b_f3b9834353cb.slice.
Apr 17 02:41:19.002758 systemd[1]: Created slice kubepods-besteffort-poddbe53259_1452_47c0_bed4_3b7fff050c7f.slice - libcontainer container kubepods-besteffort-poddbe53259_1452_47c0_bed4_3b7fff050c7f.slice.
Apr 17 02:41:19.009119 kubelet[2774]: I0417 02:41:19.008565 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprbq\" (UniqueName: \"kubernetes.io/projected/a8c1eb92-3fc9-4407-927e-e756b7916bf9-kube-api-access-jprbq\") pod \"coredns-674b8bbfcf-mhttd\" (UID: \"a8c1eb92-3fc9-4407-927e-e756b7916bf9\") " pod="kube-system/coredns-674b8bbfcf-mhttd"
Apr 17 02:41:19.011824 kubelet[2774]: I0417 02:41:19.011738 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-nginx-config\") pod \"whisker-5f4d5bc85d-mg274\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " pod="calico-system/whisker-5f4d5bc85d-mg274"
Apr 17 02:41:19.011824 kubelet[2774]: I0417 02:41:19.011866 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xwx\" (UniqueName: \"kubernetes.io/projected/dbe53259-1452-47c0-bed4-3b7fff050c7f-kube-api-access-46xwx\") pod \"whisker-5f4d5bc85d-mg274\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " pod="calico-system/whisker-5f4d5bc85d-mg274"
Apr 17 02:41:19.012130 kubelet[2774]: I0417 02:41:19.011887 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0212b919-43a9-472f-805b-f3b9834353cb-calico-apiserver-certs\") pod \"calico-apiserver-6f74589f89-kxd42\" (UID: \"0212b919-43a9-472f-805b-f3b9834353cb\") " pod="calico-system/calico-apiserver-6f74589f89-kxd42"
Apr 17 02:41:19.012758 systemd[1]: Created slice kubepods-besteffort-podffe1fc2f_f7d4_4a3c_b206_bd585e8d55b8.slice - libcontainer container kubepods-besteffort-podffe1fc2f_f7d4_4a3c_b206_bd585e8d55b8.slice.
Apr 17 02:41:19.013508 kubelet[2774]: I0417 02:41:19.013476 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8c1eb92-3fc9-4407-927e-e756b7916bf9-config-volume\") pod \"coredns-674b8bbfcf-mhttd\" (UID: \"a8c1eb92-3fc9-4407-927e-e756b7916bf9\") " pod="kube-system/coredns-674b8bbfcf-mhttd"
Apr 17 02:41:19.013560 kubelet[2774]: I0417 02:41:19.013522 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljz7b\" (UniqueName: \"kubernetes.io/projected/fc05bc4d-2ba5-414a-a341-24efa8a02e41-kube-api-access-ljz7b\") pod \"calico-apiserver-6f74589f89-p697l\" (UID: \"fc05bc4d-2ba5-414a-a341-24efa8a02e41\") " pod="calico-system/calico-apiserver-6f74589f89-p697l"
Apr 17 02:41:19.013560 kubelet[2774]: I0417 02:41:19.013537 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b-tigera-ca-bundle\") pod \"calico-kube-controllers-78755f586c-62jds\" (UID: \"9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b\") " pod="calico-system/calico-kube-controllers-78755f586c-62jds"
Apr 17 02:41:19.013560 kubelet[2774]: I0417 02:41:19.013550 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26beccb7-c020-4ba0-81e6-6a7d7a39dc14-config-volume\") pod \"coredns-674b8bbfcf-2xrj2\" (UID: \"26beccb7-c020-4ba0-81e6-6a7d7a39dc14\") " pod="kube-system/coredns-674b8bbfcf-2xrj2"
Apr 17 02:41:19.013630 kubelet[2774]: I0417 02:41:19.013563 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jh9\" (UniqueName: \"kubernetes.io/projected/ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8-kube-api-access-s5jh9\") pod \"goldmane-5b85766d88-rq6rz\" (UID: \"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8\") " pod="calico-system/goldmane-5b85766d88-rq6rz"
Apr 17 02:41:19.013630 kubelet[2774]: I0417 02:41:19.013575 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-backend-key-pair\") pod \"whisker-5f4d5bc85d-mg274\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " pod="calico-system/whisker-5f4d5bc85d-mg274"
Apr 17 02:41:19.013630 kubelet[2774]: I0417 02:41:19.013592 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwllt\" (UniqueName: \"kubernetes.io/projected/26beccb7-c020-4ba0-81e6-6a7d7a39dc14-kube-api-access-gwllt\") pod \"coredns-674b8bbfcf-2xrj2\" (UID: \"26beccb7-c020-4ba0-81e6-6a7d7a39dc14\") " pod="kube-system/coredns-674b8bbfcf-2xrj2"
Apr 17 02:41:19.013716 kubelet[2774]: I0417 02:41:19.013649 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8-config\") pod \"goldmane-5b85766d88-rq6rz\" (UID: \"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8\") " pod="calico-system/goldmane-5b85766d88-rq6rz"
Apr 17 02:41:19.013716 kubelet[2774]: I0417 02:41:19.013661 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-rq6rz\" (UID: \"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8\") " pod="calico-system/goldmane-5b85766d88-rq6rz" Apr 17 02:41:19.013758 kubelet[2774]: I0417 02:41:19.013697 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8-goldmane-key-pair\") pod \"goldmane-5b85766d88-rq6rz\" (UID: \"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8\") " pod="calico-system/goldmane-5b85766d88-rq6rz" Apr 17 02:41:19.013758 kubelet[2774]: I0417 02:41:19.013745 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-ca-bundle\") pod \"whisker-5f4d5bc85d-mg274\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " pod="calico-system/whisker-5f4d5bc85d-mg274" Apr 17 02:41:19.013834 kubelet[2774]: I0417 02:41:19.013762 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcjn\" (UniqueName: \"kubernetes.io/projected/9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b-kube-api-access-bpcjn\") pod \"calico-kube-controllers-78755f586c-62jds\" (UID: \"9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b\") " pod="calico-system/calico-kube-controllers-78755f586c-62jds" Apr 17 02:41:19.013834 kubelet[2774]: I0417 02:41:19.013774 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc05bc4d-2ba5-414a-a341-24efa8a02e41-calico-apiserver-certs\") pod \"calico-apiserver-6f74589f89-p697l\" (UID: \"fc05bc4d-2ba5-414a-a341-24efa8a02e41\") " pod="calico-system/calico-apiserver-6f74589f89-p697l" Apr 17 02:41:19.013834 kubelet[2774]: I0417 
02:41:19.013786 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvndn\" (UniqueName: \"kubernetes.io/projected/0212b919-43a9-472f-805b-f3b9834353cb-kube-api-access-mvndn\") pod \"calico-apiserver-6f74589f89-kxd42\" (UID: \"0212b919-43a9-472f-805b-f3b9834353cb\") " pod="calico-system/calico-apiserver-6f74589f89-kxd42" Apr 17 02:41:19.023076 systemd[1]: Created slice kubepods-burstable-poda8c1eb92_3fc9_4407_927e_e756b7916bf9.slice - libcontainer container kubepods-burstable-poda8c1eb92_3fc9_4407_927e_e756b7916bf9.slice. Apr 17 02:41:19.031373 systemd[1]: Created slice kubepods-burstable-pod26beccb7_c020_4ba0_81e6_6a7d7a39dc14.slice - libcontainer container kubepods-burstable-pod26beccb7_c020_4ba0_81e6_6a7d7a39dc14.slice. Apr 17 02:41:19.284026 containerd[1607]: time="2026-04-17T02:41:19.283480689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-p697l,Uid:fc05bc4d-2ba5-414a-a341-24efa8a02e41,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:19.300031 containerd[1607]: time="2026-04-17T02:41:19.299909242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78755f586c-62jds,Uid:9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:19.300031 containerd[1607]: time="2026-04-17T02:41:19.299966367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-kxd42,Uid:0212b919-43a9-472f-805b-f3b9834353cb,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:19.306579 containerd[1607]: time="2026-04-17T02:41:19.306536468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4d5bc85d-mg274,Uid:dbe53259-1452-47c0-bed4-3b7fff050c7f,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:19.326972 containerd[1607]: time="2026-04-17T02:41:19.326742028Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-rq6rz,Uid:ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:19.330473 kubelet[2774]: E0417 02:41:19.330411 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:19.331163 containerd[1607]: time="2026-04-17T02:41:19.331015060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhttd,Uid:a8c1eb92-3fc9-4407-927e-e756b7916bf9,Namespace:kube-system,Attempt:0,}" Apr 17 02:41:19.339980 kubelet[2774]: E0417 02:41:19.339645 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:19.341526 containerd[1607]: time="2026-04-17T02:41:19.341385404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xrj2,Uid:26beccb7-c020-4ba0-81e6-6a7d7a39dc14,Namespace:kube-system,Attempt:0,}" Apr 17 02:41:19.566749 containerd[1607]: time="2026-04-17T02:41:19.566366250Z" level=error msg="Failed to destroy network for sandbox \"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.571373 containerd[1607]: time="2026-04-17T02:41:19.571199443Z" level=error msg="Failed to destroy network for sandbox \"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.571584 containerd[1607]: time="2026-04-17T02:41:19.571549488Z" level=error msg="Failed to destroy network for sandbox 
\"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.573126 containerd[1607]: time="2026-04-17T02:41:19.573095714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhttd,Uid:a8c1eb92-3fc9-4407-927e-e756b7916bf9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.576513 containerd[1607]: time="2026-04-17T02:41:19.576411428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4d5bc85d-mg274,Uid:dbe53259-1452-47c0-bed4-3b7fff050c7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.581991 containerd[1607]: time="2026-04-17T02:41:19.581733321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78755f586c-62jds,Uid:9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 02:41:19.581991 containerd[1607]: time="2026-04-17T02:41:19.581904515Z" level=error msg="Failed to destroy network for sandbox \"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.586075 kubelet[2774]: E0417 02:41:19.584457 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.586075 kubelet[2774]: E0417 02:41:19.585228 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mhttd" Apr 17 02:41:19.586075 kubelet[2774]: E0417 02:41:19.585249 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mhttd" Apr 17 02:41:19.588548 kubelet[2774]: E0417 02:41:19.585455 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-mhttd_kube-system(a8c1eb92-3fc9-4407-927e-e756b7916bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mhttd_kube-system(a8c1eb92-3fc9-4407-927e-e756b7916bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9df928f143d24a282a9ed55e543baf95b313bd4a8d8e37c109ddc449853d0c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mhttd" podUID="a8c1eb92-3fc9-4407-927e-e756b7916bf9" Apr 17 02:41:19.588548 kubelet[2774]: E0417 02:41:19.584776 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.588548 kubelet[2774]: E0417 02:41:19.585576 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4d5bc85d-mg274" Apr 17 02:41:19.588775 kubelet[2774]: E0417 02:41:19.585595 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-5f4d5bc85d-mg274" Apr 17 02:41:19.588775 kubelet[2774]: E0417 02:41:19.585709 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4d5bc85d-mg274_calico-system(dbe53259-1452-47c0-bed4-3b7fff050c7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4d5bc85d-mg274_calico-system(dbe53259-1452-47c0-bed4-3b7fff050c7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2324fbf5819c708defa516bee283e1a748df62506ca863af3f6320e2472170f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4d5bc85d-mg274" podUID="dbe53259-1452-47c0-bed4-3b7fff050c7f" Apr 17 02:41:19.591112 kubelet[2774]: E0417 02:41:19.589255 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.591112 kubelet[2774]: E0417 02:41:19.589654 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78755f586c-62jds" Apr 17 02:41:19.591112 kubelet[2774]: E0417 02:41:19.589756 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78755f586c-62jds" Apr 17 02:41:19.592609 containerd[1607]: time="2026-04-17T02:41:19.589389302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rq6rz,Uid:ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.595672 kubelet[2774]: E0417 02:41:19.589995 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78755f586c-62jds_calico-system(9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78755f586c-62jds_calico-system(9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d67124d6eaddba6f59fe62b83189ee3cae9a007bd5ad2d13181a844fd335a1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78755f586c-62jds" podUID="9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b" Apr 17 02:41:19.605634 kubelet[2774]: E0417 02:41:19.604471 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.605634 kubelet[2774]: E0417 02:41:19.605351 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rq6rz" Apr 17 02:41:19.605634 kubelet[2774]: E0417 02:41:19.605397 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-rq6rz" Apr 17 02:41:19.607017 kubelet[2774]: E0417 02:41:19.606993 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-rq6rz_calico-system(ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-rq6rz_calico-system(ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bd17f452262f4d3d3bb352fa27f2582a61763d359333cca918181d8a6776b77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-rq6rz" 
podUID="ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8" Apr 17 02:41:19.628074 containerd[1607]: time="2026-04-17T02:41:19.627916335Z" level=error msg="Failed to destroy network for sandbox \"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.630337 containerd[1607]: time="2026-04-17T02:41:19.630263390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-p697l,Uid:fc05bc4d-2ba5-414a-a341-24efa8a02e41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.630738 kubelet[2774]: E0417 02:41:19.630587 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.630738 kubelet[2774]: E0417 02:41:19.630638 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f74589f89-p697l" Apr 17 02:41:19.630738 
kubelet[2774]: E0417 02:41:19.630653 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f74589f89-p697l" Apr 17 02:41:19.630881 kubelet[2774]: E0417 02:41:19.630740 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f74589f89-p697l_calico-system(fc05bc4d-2ba5-414a-a341-24efa8a02e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f74589f89-p697l_calico-system(fc05bc4d-2ba5-414a-a341-24efa8a02e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e186526c3e6c07d7e4b6de123a713384703f5ed62a889f26a0f191291348263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f74589f89-p697l" podUID="fc05bc4d-2ba5-414a-a341-24efa8a02e41" Apr 17 02:41:19.642443 containerd[1607]: time="2026-04-17T02:41:19.642381290Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 02:41:19.644904 containerd[1607]: time="2026-04-17T02:41:19.644842640Z" level=error msg="Failed to destroy network for sandbox \"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.649857 containerd[1607]: 
time="2026-04-17T02:41:19.649820007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-kxd42,Uid:0212b919-43a9-472f-805b-f3b9834353cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.650082 kubelet[2774]: E0417 02:41:19.650054 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.650134 kubelet[2774]: E0417 02:41:19.650095 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f74589f89-kxd42" Apr 17 02:41:19.650134 kubelet[2774]: E0417 02:41:19.650110 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f74589f89-kxd42" Apr 17 
02:41:19.652170 kubelet[2774]: E0417 02:41:19.651209 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f74589f89-kxd42_calico-system(0212b919-43a9-472f-805b-f3b9834353cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f74589f89-kxd42_calico-system(0212b919-43a9-472f-805b-f3b9834353cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15924961371abcee1ed8f913ad3381ace557950ba187d9bd2761cf3b1a6e6f86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f74589f89-kxd42" podUID="0212b919-43a9-472f-805b-f3b9834353cb" Apr 17 02:41:19.687430 containerd[1607]: time="2026-04-17T02:41:19.672283559Z" level=error msg="Failed to destroy network for sandbox \"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.688109 containerd[1607]: time="2026-04-17T02:41:19.673891351Z" level=info msg="Container d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:19.689387 containerd[1607]: time="2026-04-17T02:41:19.689322768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xrj2,Uid:26beccb7-c020-4ba0-81e6-6a7d7a39dc14,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 02:41:19.691132 kubelet[2774]: E0417 02:41:19.690802 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:19.691716 kubelet[2774]: E0417 02:41:19.691276 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2xrj2" Apr 17 02:41:19.691716 kubelet[2774]: E0417 02:41:19.691302 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2xrj2" Apr 17 02:41:19.691767 kubelet[2774]: E0417 02:41:19.691701 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2xrj2_kube-system(26beccb7-c020-4ba0-81e6-6a7d7a39dc14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2xrj2_kube-system(26beccb7-c020-4ba0-81e6-6a7d7a39dc14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9357ff66e9d6ddb4e7c4dc36b5dcbf0f35442102f22ecb11995b51473a0c51bf\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2xrj2" podUID="26beccb7-c020-4ba0-81e6-6a7d7a39dc14" Apr 17 02:41:19.721247 containerd[1607]: time="2026-04-17T02:41:19.721140010Z" level=info msg="CreateContainer within sandbox \"016ad9b200b272910e4d9d6c5674cd30081133f75bae8349e244b1c4adf05029\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c\"" Apr 17 02:41:19.723492 containerd[1607]: time="2026-04-17T02:41:19.723460844Z" level=info msg="StartContainer for \"d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c\"" Apr 17 02:41:19.727119 containerd[1607]: time="2026-04-17T02:41:19.726969075Z" level=info msg="connecting to shim d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c" address="unix:///run/containerd/s/82913b0dcd5d2c1c902609f12cb8d07be7e6dc4f1bb58210c3191926b58eca19" protocol=ttrpc version=3 Apr 17 02:41:19.747290 systemd[1]: Started cri-containerd-d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c.scope - libcontainer container d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c. Apr 17 02:41:19.834704 containerd[1607]: time="2026-04-17T02:41:19.834569231Z" level=info msg="StartContainer for \"d81bde78a295a595f0787be2a74975fc9953235918eed15d22037a1080ed2f6c\" returns successfully" Apr 17 02:41:20.070406 systemd[1]: Created slice kubepods-besteffort-pod7195f3e9_744b_4ed0_a1cb_10c30647c614.slice - libcontainer container kubepods-besteffort-pod7195f3e9_744b_4ed0_a1cb_10c30647c614.slice. 
Apr 17 02:41:20.081342 containerd[1607]: time="2026-04-17T02:41:20.081246311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kt7f,Uid:7195f3e9-744b-4ed0-a1cb-10c30647c614,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:20.220819 containerd[1607]: time="2026-04-17T02:41:20.220464681Z" level=error msg="Failed to destroy network for sandbox \"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:20.224913 systemd[1]: run-netns-cni\x2dfeafe32e\x2d1932\x2d8a99\x2dd1b4\x2d90b2bc986582.mount: Deactivated successfully. Apr 17 02:41:20.235730 containerd[1607]: time="2026-04-17T02:41:20.235491812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kt7f,Uid:7195f3e9-744b-4ed0-a1cb-10c30647c614,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:20.237264 kubelet[2774]: E0417 02:41:20.236909 2774 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 02:41:20.237477 kubelet[2774]: E0417 02:41:20.237439 2774 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:41:20.237555 kubelet[2774]: E0417 02:41:20.237516 2774 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kt7f" Apr 17 02:41:20.237904 kubelet[2774]: E0417 02:41:20.237755 2774 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5kt7f_calico-system(7195f3e9-744b-4ed0-a1cb-10c30647c614)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5kt7f_calico-system(7195f3e9-744b-4ed0-a1cb-10c30647c614)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf612a7e81fd2fc843b77944e6492a2522db8edfc3b9d8343bf50e9c96b76467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kt7f" podUID="7195f3e9-744b-4ed0-a1cb-10c30647c614" Apr 17 02:41:20.800006 kubelet[2774]: I0417 02:41:20.799900 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-backend-key-pair\") pod \"dbe53259-1452-47c0-bed4-3b7fff050c7f\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " Apr 17 02:41:20.800006 kubelet[2774]: I0417 
02:41:20.800042 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-nginx-config\") pod \"dbe53259-1452-47c0-bed4-3b7fff050c7f\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " Apr 17 02:41:20.800006 kubelet[2774]: I0417 02:41:20.800074 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46xwx\" (UniqueName: \"kubernetes.io/projected/dbe53259-1452-47c0-bed4-3b7fff050c7f-kube-api-access-46xwx\") pod \"dbe53259-1452-47c0-bed4-3b7fff050c7f\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " Apr 17 02:41:20.800006 kubelet[2774]: I0417 02:41:20.800097 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-ca-bundle\") pod \"dbe53259-1452-47c0-bed4-3b7fff050c7f\" (UID: \"dbe53259-1452-47c0-bed4-3b7fff050c7f\") " Apr 17 02:41:20.810719 kubelet[2774]: I0417 02:41:20.810516 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "dbe53259-1452-47c0-bed4-3b7fff050c7f" (UID: "dbe53259-1452-47c0-bed4-3b7fff050c7f"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 02:41:20.812603 kubelet[2774]: I0417 02:41:20.812474 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dbe53259-1452-47c0-bed4-3b7fff050c7f" (UID: "dbe53259-1452-47c0-bed4-3b7fff050c7f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 02:41:20.818772 systemd[1]: var-lib-kubelet-pods-dbe53259\x2d1452\x2d47c0\x2dbed4\x2d3b7fff050c7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d46xwx.mount: Deactivated successfully. Apr 17 02:41:20.820746 kubelet[2774]: I0417 02:41:20.820119 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe53259-1452-47c0-bed4-3b7fff050c7f-kube-api-access-46xwx" (OuterVolumeSpecName: "kube-api-access-46xwx") pod "dbe53259-1452-47c0-bed4-3b7fff050c7f" (UID: "dbe53259-1452-47c0-bed4-3b7fff050c7f"). InnerVolumeSpecName "kube-api-access-46xwx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 02:41:20.821639 systemd[1]: var-lib-kubelet-pods-dbe53259\x2d1452\x2d47c0\x2dbed4\x2d3b7fff050c7f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 02:41:20.824389 kubelet[2774]: I0417 02:41:20.824329 2774 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dbe53259-1452-47c0-bed4-3b7fff050c7f" (UID: "dbe53259-1452-47c0-bed4-3b7fff050c7f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 02:41:20.901574 kubelet[2774]: I0417 02:41:20.901436 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 02:41:20.901574 kubelet[2774]: I0417 02:41:20.901544 2774 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 17 02:41:20.901574 kubelet[2774]: I0417 02:41:20.901557 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-46xwx\" (UniqueName: \"kubernetes.io/projected/dbe53259-1452-47c0-bed4-3b7fff050c7f-kube-api-access-46xwx\") on node \"localhost\" DevicePath \"\"" Apr 17 02:41:20.901574 kubelet[2774]: I0417 02:41:20.901568 2774 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe53259-1452-47c0-bed4-3b7fff050c7f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 02:41:21.061475 systemd[1]: Removed slice kubepods-besteffort-poddbe53259_1452_47c0_bed4_3b7fff050c7f.slice - libcontainer container kubepods-besteffort-poddbe53259_1452_47c0_bed4_3b7fff050c7f.slice. 
Apr 17 02:41:21.717757 kubelet[2774]: I0417 02:41:21.717596 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 02:41:21.754887 kubelet[2774]: I0417 02:41:21.754646 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7rfvn" podStartSLOduration=5.707990867 podStartE2EDuration="26.75458871s" podCreationTimestamp="2026-04-17 02:40:55 +0000 UTC" firstStartedPulling="2026-04-17 02:40:56.65286911 +0000 UTC m=+19.749987301" lastFinishedPulling="2026-04-17 02:41:17.699466965 +0000 UTC m=+40.796585144" observedRunningTime="2026-04-17 02:41:20.800281818 +0000 UTC m=+43.897400010" watchObservedRunningTime="2026-04-17 02:41:21.75458871 +0000 UTC m=+44.851706893" Apr 17 02:41:21.952010 kubelet[2774]: I0417 02:41:21.949677 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cbdd891-4841-489d-93e5-6b6c5685cec2-whisker-backend-key-pair\") pod \"whisker-6c9d84799f-nm4zz\" (UID: \"3cbdd891-4841-489d-93e5-6b6c5685cec2\") " pod="calico-system/whisker-6c9d84799f-nm4zz" Apr 17 02:41:21.952010 kubelet[2774]: I0417 02:41:21.950332 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbpl\" (UniqueName: \"kubernetes.io/projected/3cbdd891-4841-489d-93e5-6b6c5685cec2-kube-api-access-vpbpl\") pod \"whisker-6c9d84799f-nm4zz\" (UID: \"3cbdd891-4841-489d-93e5-6b6c5685cec2\") " pod="calico-system/whisker-6c9d84799f-nm4zz" Apr 17 02:41:21.952010 kubelet[2774]: I0417 02:41:21.950409 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cbdd891-4841-489d-93e5-6b6c5685cec2-whisker-ca-bundle\") pod \"whisker-6c9d84799f-nm4zz\" (UID: \"3cbdd891-4841-489d-93e5-6b6c5685cec2\") " pod="calico-system/whisker-6c9d84799f-nm4zz" Apr 17 
02:41:21.952010 kubelet[2774]: I0417 02:41:21.951241 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3cbdd891-4841-489d-93e5-6b6c5685cec2-nginx-config\") pod \"whisker-6c9d84799f-nm4zz\" (UID: \"3cbdd891-4841-489d-93e5-6b6c5685cec2\") " pod="calico-system/whisker-6c9d84799f-nm4zz" Apr 17 02:41:21.955640 systemd[1]: Created slice kubepods-besteffort-pod3cbdd891_4841_489d_93e5_6b6c5685cec2.slice - libcontainer container kubepods-besteffort-pod3cbdd891_4841_489d_93e5_6b6c5685cec2.slice. Apr 17 02:41:22.299613 containerd[1607]: time="2026-04-17T02:41:22.299192277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9d84799f-nm4zz,Uid:3cbdd891-4841-489d-93e5-6b6c5685cec2,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:23.081812 kubelet[2774]: I0417 02:41:23.081543 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe53259-1452-47c0-bed4-3b7fff050c7f" path="/var/lib/kubelet/pods/dbe53259-1452-47c0-bed4-3b7fff050c7f/volumes" Apr 17 02:41:23.305664 systemd-networkd[1526]: cali6ca3bbab71d: Link UP Apr 17 02:41:23.307054 systemd-networkd[1526]: cali6ca3bbab71d: Gained carrier Apr 17 02:41:23.358576 containerd[1607]: 2026-04-17 02:41:22.567 [ERROR][4134] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 02:41:23.358576 containerd[1607]: 2026-04-17 02:41:22.672 [INFO][4134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c9d84799f--nm4zz-eth0 whisker-6c9d84799f- calico-system 3cbdd891-4841-489d-93e5-6b6c5685cec2 935 0 2026-04-17 02:41:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c9d84799f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c9d84799f-nm4zz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6ca3bbab71d [] [] }} ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-" Apr 17 02:41:23.358576 containerd[1607]: 2026-04-17 02:41:22.672 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.358576 containerd[1607]: 2026-04-17 02:41:22.822 [INFO][4146] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" HandleID="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Workload="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.870 [INFO][4146] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" HandleID="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Workload="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c9d84799f-nm4zz", "timestamp":"2026-04-17 02:41:22.822747747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000466000)} Apr 17 02:41:23.364037 containerd[1607]: 
2026-04-17 02:41:22.870 [INFO][4146] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.870 [INFO][4146] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.870 [INFO][4146] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.902 [INFO][4146] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" host="localhost" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.937 [INFO][4146] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:22.997 [INFO][4146] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:23.011 [INFO][4146] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:23.044 [INFO][4146] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:23.364037 containerd[1607]: 2026-04-17 02:41:23.044 [INFO][4146] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" host="localhost" Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.067 [INFO][4146] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45 Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.223 [INFO][4146] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" 
host="localhost" Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.282 [INFO][4146] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" host="localhost" Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.282 [INFO][4146] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" host="localhost" Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.282 [INFO][4146] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:23.365066 containerd[1607]: 2026-04-17 02:41:23.282 [INFO][4146] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" HandleID="k8s-pod-network.bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Workload="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.365232 containerd[1607]: 2026-04-17 02:41:23.286 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9d84799f--nm4zz-eth0", GenerateName:"whisker-6c9d84799f-", Namespace:"calico-system", SelfLink:"", UID:"3cbdd891-4841-489d-93e5-6b6c5685cec2", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"6c9d84799f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c9d84799f-nm4zz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ca3bbab71d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:23.365232 containerd[1607]: 2026-04-17 02:41:23.286 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.365721 containerd[1607]: 2026-04-17 02:41:23.286 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ca3bbab71d ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.365721 containerd[1607]: 2026-04-17 02:41:23.307 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.365978 containerd[1607]: 2026-04-17 02:41:23.307 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9d84799f--nm4zz-eth0", GenerateName:"whisker-6c9d84799f-", Namespace:"calico-system", SelfLink:"", UID:"3cbdd891-4841-489d-93e5-6b6c5685cec2", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9d84799f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45", Pod:"whisker-6c9d84799f-nm4zz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ca3bbab71d", MAC:"6a:07:15:e7:47:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:23.366080 containerd[1607]: 2026-04-17 02:41:23.339 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" Namespace="calico-system" Pod="whisker-6c9d84799f-nm4zz" 
WorkloadEndpoint="localhost-k8s-whisker--6c9d84799f--nm4zz-eth0" Apr 17 02:41:23.605329 containerd[1607]: time="2026-04-17T02:41:23.604902288Z" level=info msg="connecting to shim bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45" address="unix:///run/containerd/s/b6dc04105ebde92b4a95feb5f2ead1b2926e37cc6438fb94587dc4c8a8f71366" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:23.661411 systemd[1]: Started cri-containerd-bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45.scope - libcontainer container bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45. Apr 17 02:41:23.702395 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:23.749043 containerd[1607]: time="2026-04-17T02:41:23.748997026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9d84799f-nm4zz,Uid:3cbdd891-4841-489d-93e5-6b6c5685cec2,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45\"" Apr 17 02:41:23.785993 containerd[1607]: time="2026-04-17T02:41:23.785623169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 02:41:23.839762 systemd-networkd[1526]: vxlan.calico: Link UP Apr 17 02:41:23.839768 systemd-networkd[1526]: vxlan.calico: Gained carrier Apr 17 02:41:25.295649 systemd-networkd[1526]: cali6ca3bbab71d: Gained IPv6LL Apr 17 02:41:25.550337 systemd-networkd[1526]: vxlan.calico: Gained IPv6LL Apr 17 02:41:25.870668 containerd[1607]: time="2026-04-17T02:41:25.870431284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:25.874304 containerd[1607]: time="2026-04-17T02:41:25.872782020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 02:41:25.874585 containerd[1607]: 
time="2026-04-17T02:41:25.874489602Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:25.895973 containerd[1607]: time="2026-04-17T02:41:25.895609197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:25.896906 containerd[1607]: time="2026-04-17T02:41:25.896845577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.111022204s" Apr 17 02:41:25.896983 containerd[1607]: time="2026-04-17T02:41:25.896911105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 02:41:25.915327 containerd[1607]: time="2026-04-17T02:41:25.914888631Z" level=info msg="CreateContainer within sandbox \"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 02:41:25.932832 containerd[1607]: time="2026-04-17T02:41:25.932681164Z" level=info msg="Container ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:25.958857 containerd[1607]: time="2026-04-17T02:41:25.958776280Z" level=info msg="CreateContainer within sandbox \"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937\"" 
Apr 17 02:41:25.960911 containerd[1607]: time="2026-04-17T02:41:25.960877306Z" level=info msg="StartContainer for \"ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937\"" Apr 17 02:41:25.962378 containerd[1607]: time="2026-04-17T02:41:25.962317461Z" level=info msg="connecting to shim ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937" address="unix:///run/containerd/s/b6dc04105ebde92b4a95feb5f2ead1b2926e37cc6438fb94587dc4c8a8f71366" protocol=ttrpc version=3 Apr 17 02:41:25.995586 systemd[1]: Started cri-containerd-ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937.scope - libcontainer container ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937. Apr 17 02:41:26.015964 kubelet[2774]: I0417 02:41:26.015408 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 02:41:26.159425 containerd[1607]: time="2026-04-17T02:41:26.159381593Z" level=info msg="StartContainer for \"ba543dc0385ea49dd90eff52609cc1c6bd7e1382576244a36619ed51157cd937\" returns successfully" Apr 17 02:41:26.161396 containerd[1607]: time="2026-04-17T02:41:26.161021662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 02:41:27.815782 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:36508.service - OpenSSH per-connection server daemon (10.0.0.1:36508). Apr 17 02:41:28.135026 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 36508 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:28.136381 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:28.153391 systemd-logind[1598]: New session 8 of user core. Apr 17 02:41:28.160131 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 02:41:28.431625 sshd[4436]: Connection closed by 10.0.0.1 port 36508 Apr 17 02:41:28.433969 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:28.443395 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:36508.service: Deactivated successfully. Apr 17 02:41:28.457805 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 02:41:28.464747 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Apr 17 02:41:28.471613 systemd-logind[1598]: Removed session 8. Apr 17 02:41:29.016104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468805787.mount: Deactivated successfully. Apr 17 02:41:29.061895 containerd[1607]: time="2026-04-17T02:41:29.060873926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:29.064003 containerd[1607]: time="2026-04-17T02:41:29.063804661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 02:41:29.067061 containerd[1607]: time="2026-04-17T02:41:29.067002258Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:29.082165 containerd[1607]: time="2026-04-17T02:41:29.082037064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:29.083309 containerd[1607]: time="2026-04-17T02:41:29.083276753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.921876632s" Apr 17 02:41:29.083373 containerd[1607]: time="2026-04-17T02:41:29.083311307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 02:41:29.107374 containerd[1607]: time="2026-04-17T02:41:29.107301335Z" level=info msg="CreateContainer within sandbox \"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 02:41:29.130981 containerd[1607]: time="2026-04-17T02:41:29.129316953Z" level=info msg="Container c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:29.146762 containerd[1607]: time="2026-04-17T02:41:29.146657840Z" level=info msg="CreateContainer within sandbox \"bd9d043344b3d0323f1982016828c5cd4337d8f5dde8ac2d777921ddb7933d45\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659\"" Apr 17 02:41:29.148903 containerd[1607]: time="2026-04-17T02:41:29.148866954Z" level=info msg="StartContainer for \"c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659\"" Apr 17 02:41:29.152143 containerd[1607]: time="2026-04-17T02:41:29.152100663Z" level=info msg="connecting to shim c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659" address="unix:///run/containerd/s/b6dc04105ebde92b4a95feb5f2ead1b2926e37cc6438fb94587dc4c8a8f71366" protocol=ttrpc version=3 Apr 17 02:41:29.207588 systemd[1]: Started cri-containerd-c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659.scope - libcontainer container c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659. 
Apr 17 02:41:29.390095 containerd[1607]: time="2026-04-17T02:41:29.389837809Z" level=info msg="StartContainer for \"c5f28ad2b9d5e6c5ed0c40f19bbeb463ec99d641a30c380a991584d526406659\" returns successfully" Apr 17 02:41:30.050075 containerd[1607]: time="2026-04-17T02:41:30.049968023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-kxd42,Uid:0212b919-43a9-472f-805b-f3b9834353cb,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:30.344669 systemd-networkd[1526]: cali4274d165afb: Link UP Apr 17 02:41:30.344907 systemd-networkd[1526]: cali4274d165afb: Gained carrier Apr 17 02:41:30.377002 kubelet[2774]: I0417 02:41:30.375858 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c9d84799f-nm4zz" podStartSLOduration=4.047220962 podStartE2EDuration="9.375840579s" podCreationTimestamp="2026-04-17 02:41:21 +0000 UTC" firstStartedPulling="2026-04-17 02:41:23.762221424 +0000 UTC m=+46.859339606" lastFinishedPulling="2026-04-17 02:41:29.090841045 +0000 UTC m=+52.187959223" observedRunningTime="2026-04-17 02:41:30.036675166 +0000 UTC m=+53.133793351" watchObservedRunningTime="2026-04-17 02:41:30.375840579 +0000 UTC m=+53.472958754" Apr 17 02:41:30.384088 containerd[1607]: 2026-04-17 02:41:30.142 [INFO][4506] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0 calico-apiserver-6f74589f89- calico-system 0212b919-43a9-472f-805b-f3b9834353cb 879 0 2026-04-17 02:40:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f74589f89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f74589f89-kxd42 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4274d165afb [] [] }} 
ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-" Apr 17 02:41:30.384088 containerd[1607]: 2026-04-17 02:41:30.143 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.384088 containerd[1607]: 2026-04-17 02:41:30.199 [INFO][4520] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" HandleID="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Workload="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.225 [INFO][4520] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" HandleID="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Workload="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040fbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f74589f89-kxd42", "timestamp":"2026-04-17 02:41:30.199471967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00041ab00)} Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.227 [INFO][4520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.227 [INFO][4520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.228 [INFO][4520] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.236 [INFO][4520] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" host="localhost" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.243 [INFO][4520] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.263 [INFO][4520] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.279 [INFO][4520] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.289 [INFO][4520] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:30.384422 containerd[1607]: 2026-04-17 02:41:30.290 [INFO][4520] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" host="localhost" Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.299 [INFO][4520] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.313 [INFO][4520] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" host="localhost" Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.335 [INFO][4520] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" host="localhost" Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.335 [INFO][4520] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" host="localhost" Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.335 [INFO][4520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:30.384645 containerd[1607]: 2026-04-17 02:41:30.335 [INFO][4520] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" HandleID="k8s-pod-network.73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Workload="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.384859 containerd[1607]: 2026-04-17 02:41:30.339 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0", GenerateName:"calico-apiserver-6f74589f89-", Namespace:"calico-system", SelfLink:"", UID:"0212b919-43a9-472f-805b-f3b9834353cb", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f74589f89", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f74589f89-kxd42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4274d165afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:30.385442 containerd[1607]: 2026-04-17 02:41:30.339 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.385442 containerd[1607]: 2026-04-17 02:41:30.340 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4274d165afb ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.385442 containerd[1607]: 2026-04-17 02:41:30.345 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.385556 containerd[1607]: 2026-04-17 02:41:30.345 
[INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0", GenerateName:"calico-apiserver-6f74589f89-", Namespace:"calico-system", SelfLink:"", UID:"0212b919-43a9-472f-805b-f3b9834353cb", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f74589f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f", Pod:"calico-apiserver-6f74589f89-kxd42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4274d165afb", MAC:"1a:d3:22:6f:5f:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:30.385778 containerd[1607]: 2026-04-17 02:41:30.381 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-kxd42" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--kxd42-eth0" Apr 17 02:41:30.421163 containerd[1607]: time="2026-04-17T02:41:30.420852801Z" level=info msg="connecting to shim 73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f" address="unix:///run/containerd/s/fef52b767fbf93f961932f37dc439536050bf564d283fe84f3a31a6d41f76c60" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:30.444207 systemd[1]: Started cri-containerd-73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f.scope - libcontainer container 73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f. Apr 17 02:41:30.471625 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:30.521900 containerd[1607]: time="2026-04-17T02:41:30.521864274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-kxd42,Uid:0212b919-43a9-472f-805b-f3b9834353cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f\"" Apr 17 02:41:30.524485 containerd[1607]: time="2026-04-17T02:41:30.524396813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 02:41:31.049394 kubelet[2774]: E0417 02:41:31.049366 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:31.049875 containerd[1607]: time="2026-04-17T02:41:31.049673905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rq6rz,Uid:ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:31.050293 containerd[1607]: time="2026-04-17T02:41:31.050250903Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xrj2,Uid:26beccb7-c020-4ba0-81e6-6a7d7a39dc14,Namespace:kube-system,Attempt:0,}" Apr 17 02:41:31.272390 systemd-networkd[1526]: cali1914fc708cb: Link UP Apr 17 02:41:31.272559 systemd-networkd[1526]: cali1914fc708cb: Gained carrier Apr 17 02:41:31.287458 containerd[1607]: 2026-04-17 02:41:31.126 [INFO][4604] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--rq6rz-eth0 goldmane-5b85766d88- calico-system ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8 881 0 2026-04-17 02:40:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-rq6rz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1914fc708cb [] [] }} ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-" Apr 17 02:41:31.287458 containerd[1607]: 2026-04-17 02:41:31.127 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.287458 containerd[1607]: 2026-04-17 02:41:31.185 [INFO][4631] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" HandleID="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Workload="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4631] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" HandleID="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Workload="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-rq6rz", "timestamp":"2026-04-17 02:41:31.185870878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000173080)} Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4631] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.208 [INFO][4631] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" host="localhost" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.224 [INFO][4631] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.230 [INFO][4631] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.232 [INFO][4631] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.235 [INFO][4631] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:31.287795 containerd[1607]: 2026-04-17 02:41:31.235 [INFO][4631] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" host="localhost" Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.240 [INFO][4631] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301 Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.252 [INFO][4631] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" host="localhost" Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.265 [INFO][4631] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" host="localhost" Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.265 [INFO][4631] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" host="localhost" Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.266 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:31.289512 containerd[1607]: 2026-04-17 02:41:31.266 [INFO][4631] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" HandleID="k8s-pod-network.9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Workload="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.289627 containerd[1607]: 2026-04-17 02:41:31.269 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--rq6rz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-rq6rz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1914fc708cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:31.289627 containerd[1607]: 2026-04-17 02:41:31.270 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.289716 containerd[1607]: 2026-04-17 02:41:31.270 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1914fc708cb ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.289716 containerd[1607]: 2026-04-17 02:41:31.272 [INFO][4604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.289773 containerd[1607]: 2026-04-17 02:41:31.272 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--rq6rz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301", Pod:"goldmane-5b85766d88-rq6rz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1914fc708cb", MAC:"2a:75:15:3c:8c:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:31.289838 containerd[1607]: 2026-04-17 02:41:31.285 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" Namespace="calico-system" Pod="goldmane-5b85766d88-rq6rz" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--rq6rz-eth0" Apr 17 02:41:31.349465 containerd[1607]: time="2026-04-17T02:41:31.347453938Z" level=info msg="connecting to shim 
9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301" address="unix:///run/containerd/s/ee0a962a58b7dbc05b6974ec65d739a1334d74e11f8dcc44ae7835faa0e1a222" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:31.403765 systemd[1]: Started cri-containerd-9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301.scope - libcontainer container 9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301. Apr 17 02:41:31.426635 systemd-networkd[1526]: cali2217ca67578: Link UP Apr 17 02:41:31.427820 systemd-networkd[1526]: cali2217ca67578: Gained carrier Apr 17 02:41:31.431878 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:31.454166 containerd[1607]: 2026-04-17 02:41:31.130 [INFO][4602] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0 coredns-674b8bbfcf- kube-system 26beccb7-c020-4ba0-81e6-6a7d7a39dc14 882 0 2026-04-17 02:40:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2xrj2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2217ca67578 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-" Apr 17 02:41:31.454166 containerd[1607]: 2026-04-17 02:41:31.130 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.454166 containerd[1607]: 
2026-04-17 02:41:31.186 [INFO][4638] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" HandleID="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Workload="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4638] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" HandleID="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Workload="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033f570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2xrj2", "timestamp":"2026-04-17 02:41:31.186327302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00016b080)} Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.204 [INFO][4638] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.265 [INFO][4638] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.266 [INFO][4638] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.320 [INFO][4638] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" host="localhost" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.343 [INFO][4638] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.359 [INFO][4638] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.366 [INFO][4638] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.383 [INFO][4638] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:31.454757 containerd[1607]: 2026-04-17 02:41:31.384 [INFO][4638] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" host="localhost" Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.390 [INFO][4638] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9 Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.406 [INFO][4638] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" host="localhost" Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.421 [INFO][4638] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" host="localhost" Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.421 [INFO][4638] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" host="localhost" Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.421 [INFO][4638] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:31.457081 containerd[1607]: 2026-04-17 02:41:31.421 [INFO][4638] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" HandleID="k8s-pod-network.650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Workload="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.459399 containerd[1607]: 2026-04-17 02:41:31.423 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"26beccb7-c020-4ba0-81e6-6a7d7a39dc14", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2xrj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2217ca67578", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:31.459883 containerd[1607]: 2026-04-17 02:41:31.423 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.459883 containerd[1607]: 2026-04-17 02:41:31.424 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2217ca67578 ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.459883 containerd[1607]: 2026-04-17 02:41:31.428 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.460002 containerd[1607]: 2026-04-17 02:41:31.429 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"26beccb7-c020-4ba0-81e6-6a7d7a39dc14", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9", Pod:"coredns-674b8bbfcf-2xrj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2217ca67578", MAC:"82:e0:9a:8b:48:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:31.460002 containerd[1607]: 2026-04-17 02:41:31.450 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-2xrj2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2xrj2-eth0" Apr 17 02:41:31.527772 containerd[1607]: time="2026-04-17T02:41:31.527686445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-rq6rz,Uid:ffe1fc2f-f7d4-4a3c-b206-bd585e8d55b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301\"" Apr 17 02:41:31.542776 containerd[1607]: time="2026-04-17T02:41:31.542662079Z" level=info msg="connecting to shim 650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9" address="unix:///run/containerd/s/431f76bfd61aa54c12f3a3737cfe54d545c6bc297209e4be575a480ce7730906" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:31.566609 systemd-networkd[1526]: cali4274d165afb: Gained IPv6LL Apr 17 02:41:31.618273 systemd[1]: Started cri-containerd-650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9.scope - libcontainer container 650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9. 
Apr 17 02:41:31.636259 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:31.713076 containerd[1607]: time="2026-04-17T02:41:31.713027741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2xrj2,Uid:26beccb7-c020-4ba0-81e6-6a7d7a39dc14,Namespace:kube-system,Attempt:0,} returns sandbox id \"650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9\"" Apr 17 02:41:31.717323 kubelet[2774]: E0417 02:41:31.717278 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:31.735137 containerd[1607]: time="2026-04-17T02:41:31.735082844Z" level=info msg="CreateContainer within sandbox \"650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 02:41:31.766953 containerd[1607]: time="2026-04-17T02:41:31.766828097Z" level=info msg="Container 2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:31.812465 containerd[1607]: time="2026-04-17T02:41:31.811712109Z" level=info msg="CreateContainer within sandbox \"650d163dcba49bfd48237e6e074d05da9a629995ba0dedce7ba2e4cb3b4184d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07\"" Apr 17 02:41:31.823741 containerd[1607]: time="2026-04-17T02:41:31.823236928Z" level=info msg="StartContainer for \"2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07\"" Apr 17 02:41:31.832955 containerd[1607]: time="2026-04-17T02:41:31.832839974Z" level=info msg="connecting to shim 2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07" address="unix:///run/containerd/s/431f76bfd61aa54c12f3a3737cfe54d545c6bc297209e4be575a480ce7730906" protocol=ttrpc version=3 
Apr 17 02:41:31.891688 systemd[1]: Started cri-containerd-2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07.scope - libcontainer container 2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07. Apr 17 02:41:32.021177 containerd[1607]: time="2026-04-17T02:41:32.021081254Z" level=info msg="StartContainer for \"2718aa3b02fce448b3574ade7a10c6a7edac5ad4c81c7b3aad8e3f6ca8aa9c07\" returns successfully" Apr 17 02:41:32.097347 kubelet[2774]: E0417 02:41:32.097282 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:32.244166 kubelet[2774]: I0417 02:41:32.240525 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2xrj2" podStartSLOduration=49.238536384 podStartE2EDuration="49.238536384s" podCreationTimestamp="2026-04-17 02:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:41:32.228865399 +0000 UTC m=+55.325983590" watchObservedRunningTime="2026-04-17 02:41:32.238536384 +0000 UTC m=+55.335654571" Apr 17 02:41:32.528609 systemd-networkd[1526]: cali1914fc708cb: Gained IPv6LL Apr 17 02:41:32.718982 systemd-networkd[1526]: cali2217ca67578: Gained IPv6LL Apr 17 02:41:33.067977 containerd[1607]: time="2026-04-17T02:41:33.067818074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kt7f,Uid:7195f3e9-744b-4ed0-a1cb-10c30647c614,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:33.070606 containerd[1607]: time="2026-04-17T02:41:33.070189894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-p697l,Uid:fc05bc4d-2ba5-414a-a341-24efa8a02e41,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:33.240612 kubelet[2774]: E0417 02:41:33.240564 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:33.473148 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:38098.service - OpenSSH per-connection server daemon (10.0.0.1:38098). Apr 17 02:41:33.721417 systemd-networkd[1526]: cali50d5076ae01: Link UP Apr 17 02:41:33.721806 systemd-networkd[1526]: cali50d5076ae01: Gained carrier Apr 17 02:41:33.761429 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 38098 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:33.764803 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.148 [INFO][4834] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0 calico-apiserver-6f74589f89- calico-system fc05bc4d-2ba5-414a-a341-24efa8a02e41 874 0 2026-04-17 02:40:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f74589f89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f74589f89-p697l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali50d5076ae01 [] [] }} ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.148 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.206 [INFO][4860] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" HandleID="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Workload="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.241 [INFO][4860] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" HandleID="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Workload="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f74589f89-p697l", "timestamp":"2026-04-17 02:41:33.205657041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027a420)} Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.241 [INFO][4860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.245 [INFO][4860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.245 [INFO][4860] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.267 [INFO][4860] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.323 [INFO][4860] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.345 [INFO][4860] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.350 [INFO][4860] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.375 [INFO][4860] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.375 [INFO][4860] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.393 [INFO][4860] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1 Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.426 [INFO][4860] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.469 [INFO][4860] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.470 [INFO][4860] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" host="localhost" Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.472 [INFO][4860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:33.767640 containerd[1607]: 2026-04-17 02:41:33.472 [INFO][4860] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" HandleID="k8s-pod-network.3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Workload="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.491 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0", GenerateName:"calico-apiserver-6f74589f89-", Namespace:"calico-system", SelfLink:"", UID:"fc05bc4d-2ba5-414a-a341-24efa8a02e41", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f74589f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f74589f89-p697l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali50d5076ae01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.502 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.519 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50d5076ae01 ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.720 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.722 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0", GenerateName:"calico-apiserver-6f74589f89-", Namespace:"calico-system", SelfLink:"", UID:"fc05bc4d-2ba5-414a-a341-24efa8a02e41", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f74589f89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1", Pod:"calico-apiserver-6f74589f89-p697l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali50d5076ae01", MAC:"6a:b9:39:d1:5e:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:33.771853 containerd[1607]: 2026-04-17 02:41:33.745 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" Namespace="calico-system" Pod="calico-apiserver-6f74589f89-p697l" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f74589f89--p697l-eth0" Apr 17 02:41:33.793194 systemd-logind[1598]: New session 9 of user core. Apr 17 02:41:33.800648 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 02:41:33.927434 systemd-networkd[1526]: cali4ba8892d80b: Link UP Apr 17 02:41:33.930851 containerd[1607]: time="2026-04-17T02:41:33.930795313Z" level=info msg="connecting to shim 3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1" address="unix:///run/containerd/s/fd64a2fbc4e6e1b1f7787cf55a9e92ee3136215bc646b2ff362f8a815844f018" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:33.931179 systemd-networkd[1526]: cali4ba8892d80b: Gained carrier Apr 17 02:41:33.993802 systemd[1]: Started cri-containerd-3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1.scope - libcontainer container 3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1. 
Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.148 [INFO][4832] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5kt7f-eth0 csi-node-driver- calico-system 7195f3e9-744b-4ed0-a1cb-10c30647c614 723 0 2026-04-17 02:40:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5kt7f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4ba8892d80b [] [] }} ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.148 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.199 [INFO][4867] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" HandleID="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Workload="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.246 [INFO][4867] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" HandleID="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" 
Workload="localhost-k8s-csi--node--driver--5kt7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5kt7f", "timestamp":"2026-04-17 02:41:33.199643119 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00031d340)} Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.246 [INFO][4867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.470 [INFO][4867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.471 [INFO][4867] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.541 [INFO][4867] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.718 [INFO][4867] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.771 [INFO][4867] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.810 [INFO][4867] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.828 [INFO][4867] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.830 [INFO][4867] ipam/ipam.go 1245: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.839 [INFO][4867] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899 Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.890 [INFO][4867] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.912 [INFO][4867] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.912 [INFO][4867] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" host="localhost" Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.912 [INFO][4867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 02:41:34.020227 containerd[1607]: 2026-04-17 02:41:33.912 [INFO][4867] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" HandleID="k8s-pod-network.be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Workload="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:33.917 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kt7f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7195f3e9-744b-4ed0-a1cb-10c30647c614", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5kt7f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4ba8892d80b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:33.921 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:33.922 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ba8892d80b ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:33.939 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:33.940 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kt7f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7195f3e9-744b-4ed0-a1cb-10c30647c614", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 56, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899", Pod:"csi-node-driver-5kt7f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4ba8892d80b", MAC:"5e:8c:54:c6:eb:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:34.021674 containerd[1607]: 2026-04-17 02:41:34.008 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" Namespace="calico-system" Pod="csi-node-driver-5kt7f" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kt7f-eth0" Apr 17 02:41:34.031953 sshd[4898]: Connection closed by 10.0.0.1 port 38098 Apr 17 02:41:34.032695 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:34.042913 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:38098.service: Deactivated successfully. 
Apr 17 02:41:34.043736 kubelet[2774]: E0417 02:41:34.043626 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:34.044545 containerd[1607]: time="2026-04-17T02:41:34.044502511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhttd,Uid:a8c1eb92-3fc9-4407-927e-e756b7916bf9,Namespace:kube-system,Attempt:0,}" Apr 17 02:41:34.044771 containerd[1607]: time="2026-04-17T02:41:34.044689758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78755f586c-62jds,Uid:9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b,Namespace:calico-system,Attempt:0,}" Apr 17 02:41:34.047141 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 02:41:34.048486 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Apr 17 02:41:34.050856 systemd-logind[1598]: Removed session 9. Apr 17 02:41:34.094681 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:34.153635 containerd[1607]: time="2026-04-17T02:41:34.153553760Z" level=info msg="connecting to shim be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899" address="unix:///run/containerd/s/5c5c1f483d5fb354e0b744454d49889ef8b42c3b229b594dc8b1810136f8ffc1" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:34.310339 kubelet[2774]: E0417 02:41:34.309796 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:34.321138 containerd[1607]: time="2026-04-17T02:41:34.321022161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f74589f89-p697l,Uid:fc05bc4d-2ba5-414a-a341-24efa8a02e41,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1\"" Apr 17 02:41:34.468237 systemd[1]: Started cri-containerd-be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899.scope - libcontainer container be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899. Apr 17 02:41:34.751152 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:35.050226 containerd[1607]: time="2026-04-17T02:41:35.048159505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kt7f,Uid:7195f3e9-744b-4ed0-a1cb-10c30647c614,Namespace:calico-system,Attempt:0,} returns sandbox id \"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899\"" Apr 17 02:41:35.216681 systemd-networkd[1526]: cali4ba8892d80b: Gained IPv6LL Apr 17 02:41:35.250877 systemd-networkd[1526]: cali1641c10b12a: Link UP Apr 17 02:41:35.251083 systemd-networkd[1526]: cali1641c10b12a: Gained carrier Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.220 [INFO][4987] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0 calico-kube-controllers-78755f586c- calico-system 9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b 878 0 2026-04-17 02:40:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78755f586c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78755f586c-62jds eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1641c10b12a [] [] }} ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.223 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.738 [INFO][5039] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" HandleID="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Workload="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.883 [INFO][5039] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" HandleID="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Workload="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047c120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78755f586c-62jds", "timestamp":"2026-04-17 02:41:34.738522248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000354000)} Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.884 [INFO][5039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.886 [INFO][5039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.890 [INFO][5039] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:34.970 [INFO][5039] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.001 [INFO][5039] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.035 [INFO][5039] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.066 [INFO][5039] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.104 [INFO][5039] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.105 [INFO][5039] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.127 [INFO][5039] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16 Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.152 [INFO][5039] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.201 [INFO][5039] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.201 [INFO][5039] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" host="localhost" Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.205 [INFO][5039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 02:41:35.414785 containerd[1607]: 2026-04-17 02:41:35.205 [INFO][5039] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" HandleID="k8s-pod-network.45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Workload="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.227 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0", GenerateName:"calico-kube-controllers-78755f586c-", Namespace:"calico-system", SelfLink:"", UID:"9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"78755f586c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78755f586c-62jds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1641c10b12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.227 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.227 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1641c10b12a ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.252 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.252 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0", GenerateName:"calico-kube-controllers-78755f586c-", Namespace:"calico-system", SelfLink:"", UID:"9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78755f586c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16", Pod:"calico-kube-controllers-78755f586c-62jds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1641c10b12a", MAC:"ba:f8:a1:d2:37:d3", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:35.416587 containerd[1607]: 2026-04-17 02:41:35.409 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" Namespace="calico-system" Pod="calico-kube-controllers-78755f586c-62jds" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78755f586c--62jds-eth0" Apr 17 02:41:35.534467 systemd-networkd[1526]: cali50d5076ae01: Gained IPv6LL Apr 17 02:41:35.539441 containerd[1607]: time="2026-04-17T02:41:35.539067228Z" level=info msg="connecting to shim 45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16" address="unix:///run/containerd/s/e4de40affc52f59cdc0d534a64825d9b5991225e2d58d08ea6451655bda15928" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:35.622917 systemd-networkd[1526]: cali7ab9cce843f: Link UP Apr 17 02:41:35.626914 systemd-networkd[1526]: cali7ab9cce843f: Gained carrier Apr 17 02:41:35.643118 systemd[1]: Started cri-containerd-45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16.scope - libcontainer container 45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16. 
Apr 17 02:41:35.726581 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:34.968 [INFO][4964] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mhttd-eth0 coredns-674b8bbfcf- kube-system a8c1eb92-3fc9-4407-927e-e756b7916bf9 880 0 2026-04-17 02:40:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mhttd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ab9cce843f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:34.969 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.196 [INFO][5078] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" HandleID="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Workload="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.210 [INFO][5078] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" 
HandleID="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Workload="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b29d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mhttd", "timestamp":"2026-04-17 02:41:35.196898245 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000586580)} Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.210 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.210 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.213 [INFO][5078] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.248 [INFO][5078] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.297 [INFO][5078] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.434 [INFO][5078] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.439 [INFO][5078] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.468 [INFO][5078] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 02:41:35.733989 
containerd[1607]: 2026-04-17 02:41:35.480 [INFO][5078] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.500 [INFO][5078] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.518 [INFO][5078] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.561 [INFO][5078] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.562 [INFO][5078] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" host="localhost" Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.562 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 02:41:35.733989 containerd[1607]: 2026-04-17 02:41:35.562 [INFO][5078] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" HandleID="k8s-pod-network.b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Workload="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.572 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mhttd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a8c1eb92-3fc9-4407-927e-e756b7916bf9", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mhttd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ab9cce843f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.573 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.573 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ab9cce843f ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.630 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.640 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mhttd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a8c1eb92-3fc9-4407-927e-e756b7916bf9", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 2, 40, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de", Pod:"coredns-674b8bbfcf-mhttd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ab9cce843f", MAC:"e2:b7:39:e9:05:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 02:41:35.734507 containerd[1607]: 2026-04-17 02:41:35.730 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhttd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhttd-eth0" Apr 17 02:41:35.807241 containerd[1607]: time="2026-04-17T02:41:35.807121599Z" level=info msg="connecting to shim b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de" address="unix:///run/containerd/s/d2744666f6799075bd599f75fc8492c92c4efbc6c0250bb5766a747287478ce0" namespace=k8s.io protocol=ttrpc version=3 Apr 17 02:41:35.836581 containerd[1607]: time="2026-04-17T02:41:35.836434288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78755f586c-62jds,Uid:9fdbdd91-ac27-4de1-a7d9-71f179ae5c6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16\"" Apr 17 02:41:35.954199 systemd[1]: Started cri-containerd-b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de.scope - libcontainer container b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de. 
Apr 17 02:41:36.034357 systemd-resolved[1424]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 02:41:36.129856 containerd[1607]: time="2026-04-17T02:41:36.129789238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhttd,Uid:a8c1eb92-3fc9-4407-927e-e756b7916bf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de\"" Apr 17 02:41:36.131891 kubelet[2774]: E0417 02:41:36.131829 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:36.146974 containerd[1607]: time="2026-04-17T02:41:36.144706889Z" level=info msg="CreateContainer within sandbox \"b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 02:41:36.231854 containerd[1607]: time="2026-04-17T02:41:36.231452634Z" level=info msg="Container fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:36.252127 containerd[1607]: time="2026-04-17T02:41:36.252076028Z" level=info msg="CreateContainer within sandbox \"b8d7a1e98cefef7691386d09a3d6a3e79aaaaafdd78203b2000ded3391b986de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b\"" Apr 17 02:41:36.264962 containerd[1607]: time="2026-04-17T02:41:36.264847635Z" level=info msg="StartContainer for \"fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b\"" Apr 17 02:41:36.276314 containerd[1607]: time="2026-04-17T02:41:36.276095194Z" level=info msg="connecting to shim fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b" address="unix:///run/containerd/s/d2744666f6799075bd599f75fc8492c92c4efbc6c0250bb5766a747287478ce0" protocol=ttrpc version=3 
Apr 17 02:41:36.320267 systemd[1]: Started cri-containerd-fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b.scope - libcontainer container fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b. Apr 17 02:41:36.420792 containerd[1607]: time="2026-04-17T02:41:36.420717766Z" level=info msg="StartContainer for \"fa4b511841e822e758611e1f37189df5da8c2bfb7d7c33fa78830256f1ca591b\" returns successfully" Apr 17 02:41:36.483526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442262945.mount: Deactivated successfully. Apr 17 02:41:36.513812 kubelet[2774]: E0417 02:41:36.513262 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:36.558902 systemd-networkd[1526]: cali1641c10b12a: Gained IPv6LL Apr 17 02:41:36.640997 kubelet[2774]: I0417 02:41:36.586860 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mhttd" podStartSLOduration=53.586834551 podStartE2EDuration="53.586834551s" podCreationTimestamp="2026-04-17 02:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 02:41:36.574903563 +0000 UTC m=+59.672021742" watchObservedRunningTime="2026-04-17 02:41:36.586834551 +0000 UTC m=+59.683952740" Apr 17 02:41:36.943893 systemd-networkd[1526]: cali7ab9cce843f: Gained IPv6LL Apr 17 02:41:37.004909 containerd[1607]: time="2026-04-17T02:41:37.004673244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:37.009082 containerd[1607]: time="2026-04-17T02:41:37.007663440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 02:41:37.016751 containerd[1607]: 
time="2026-04-17T02:41:37.016614257Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:37.027125 containerd[1607]: time="2026-04-17T02:41:37.027023610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:37.027743 containerd[1607]: time="2026-04-17T02:41:37.027675841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.503082317s" Apr 17 02:41:37.027743 containerd[1607]: time="2026-04-17T02:41:37.027737410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 02:41:37.031108 containerd[1607]: time="2026-04-17T02:41:37.031078864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 02:41:37.046869 containerd[1607]: time="2026-04-17T02:41:37.045220357Z" level=info msg="CreateContainer within sandbox \"73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 02:41:37.240702 containerd[1607]: time="2026-04-17T02:41:37.240088615Z" level=info msg="Container a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:37.282966 containerd[1607]: time="2026-04-17T02:41:37.282836309Z" level=info msg="CreateContainer within sandbox 
\"73e6279f0ce0aeed5ebc45b8d7f854e547cd21eff57b9e3ea6d12efff95f509f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433\"" Apr 17 02:41:37.291362 containerd[1607]: time="2026-04-17T02:41:37.291237162Z" level=info msg="StartContainer for \"a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433\"" Apr 17 02:41:37.313085 containerd[1607]: time="2026-04-17T02:41:37.313036223Z" level=info msg="connecting to shim a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433" address="unix:///run/containerd/s/fef52b767fbf93f961932f37dc439536050bf564d283fe84f3a31a6d41f76c60" protocol=ttrpc version=3 Apr 17 02:41:37.394382 systemd[1]: Started cri-containerd-a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433.scope - libcontainer container a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433. Apr 17 02:41:37.663726 kubelet[2774]: E0417 02:41:37.663665 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:37.713214 containerd[1607]: time="2026-04-17T02:41:37.713161162Z" level=info msg="StartContainer for \"a4775ebb4286d612f9b385a78cd36b3d610c48cb997d09c02896191fadd76433\" returns successfully" Apr 17 02:41:38.766890 kubelet[2774]: E0417 02:41:38.766798 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:39.059104 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:38106.service - OpenSSH per-connection server daemon (10.0.0.1:38106). 
Apr 17 02:41:39.474017 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 38106 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:39.485689 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:39.722075 systemd-logind[1598]: New session 10 of user core. Apr 17 02:41:39.735272 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 02:41:39.985967 kubelet[2774]: I0417 02:41:39.984650 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 02:41:40.000712 kubelet[2774]: E0417 02:41:39.996095 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:40.363432 sshd[5318]: Connection closed by 10.0.0.1 port 38106 Apr 17 02:41:40.397209 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:40.421063 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:38106.service: Deactivated successfully. Apr 17 02:41:40.451403 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 02:41:40.470201 systemd-logind[1598]: Session 10 logged out. Waiting for processes to exit. Apr 17 02:41:40.492491 systemd-logind[1598]: Removed session 10. Apr 17 02:41:41.550898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122179294.mount: Deactivated successfully. 
Apr 17 02:41:42.730468 containerd[1607]: time="2026-04-17T02:41:42.730347610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:42.732891 containerd[1607]: time="2026-04-17T02:41:42.732859635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 02:41:42.734307 containerd[1607]: time="2026-04-17T02:41:42.734265446Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:42.871220 containerd[1607]: time="2026-04-17T02:41:42.871137816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:42.905043 containerd[1607]: time="2026-04-17T02:41:42.904024689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.872573981s" Apr 17 02:41:42.907783 containerd[1607]: time="2026-04-17T02:41:42.907483672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 02:41:42.924435 containerd[1607]: time="2026-04-17T02:41:42.924364218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 02:41:42.936099 containerd[1607]: time="2026-04-17T02:41:42.936042976Z" level=info msg="CreateContainer within sandbox 
\"9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 02:41:42.968315 containerd[1607]: time="2026-04-17T02:41:42.965636509Z" level=info msg="Container 69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:42.987039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325125386.mount: Deactivated successfully. Apr 17 02:41:43.014665 containerd[1607]: time="2026-04-17T02:41:43.014607840Z" level=info msg="CreateContainer within sandbox \"9f60b1b8c616f00f880e85616704d00dfd8991db5ef059ba424889a11e505301\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d\"" Apr 17 02:41:43.021109 containerd[1607]: time="2026-04-17T02:41:43.020967017Z" level=info msg="StartContainer for \"69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d\"" Apr 17 02:41:43.024897 containerd[1607]: time="2026-04-17T02:41:43.024869569Z" level=info msg="connecting to shim 69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d" address="unix:///run/containerd/s/ee0a962a58b7dbc05b6974ec65d739a1334d74e11f8dcc44ae7835faa0e1a222" protocol=ttrpc version=3 Apr 17 02:41:43.064601 systemd[1]: Started cri-containerd-69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d.scope - libcontainer container 69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d. 
Apr 17 02:41:43.304425 containerd[1607]: time="2026-04-17T02:41:43.303709666Z" level=info msg="StartContainer for \"69867067356d415bc31228ba5681bf834acf95ddb3cf0e196e3868aceaafa17d\" returns successfully" Apr 17 02:41:43.540813 containerd[1607]: time="2026-04-17T02:41:43.540745352Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:43.545656 containerd[1607]: time="2026-04-17T02:41:43.545125070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 02:41:43.566235 containerd[1607]: time="2026-04-17T02:41:43.565968977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 641.552817ms" Apr 17 02:41:43.566235 containerd[1607]: time="2026-04-17T02:41:43.566037997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 02:41:43.573722 containerd[1607]: time="2026-04-17T02:41:43.573668292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 02:41:43.681287 containerd[1607]: time="2026-04-17T02:41:43.681145094Z" level=info msg="CreateContainer within sandbox \"3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 02:41:43.709958 containerd[1607]: time="2026-04-17T02:41:43.709313654Z" level=info msg="Container fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:43.773747 containerd[1607]: 
time="2026-04-17T02:41:43.773671283Z" level=info msg="CreateContainer within sandbox \"3a9d9a1589dbe7cab2b9aa4c87cf617a4ca7356b609b6d5435d434e01baeabd1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593\"" Apr 17 02:41:43.787975 containerd[1607]: time="2026-04-17T02:41:43.787275036Z" level=info msg="StartContainer for \"fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593\"" Apr 17 02:41:43.810497 containerd[1607]: time="2026-04-17T02:41:43.810318217Z" level=info msg="connecting to shim fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593" address="unix:///run/containerd/s/fd64a2fbc4e6e1b1f7787cf55a9e92ee3136215bc646b2ff362f8a815844f018" protocol=ttrpc version=3 Apr 17 02:41:43.845289 systemd[1]: Started cri-containerd-fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593.scope - libcontainer container fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593. 
Apr 17 02:41:44.102990 containerd[1607]: time="2026-04-17T02:41:44.101886172Z" level=info msg="StartContainer for \"fb79b465f4f8b699777ce32aefffbea394c201b593f4537c14211f0886ae8593\" returns successfully" Apr 17 02:41:44.246842 kubelet[2774]: I0417 02:41:44.245658 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f74589f89-kxd42" podStartSLOduration=43.739324848 podStartE2EDuration="50.245643116s" podCreationTimestamp="2026-04-17 02:40:54 +0000 UTC" firstStartedPulling="2026-04-17 02:41:30.524036771 +0000 UTC m=+53.621154946" lastFinishedPulling="2026-04-17 02:41:37.030355035 +0000 UTC m=+60.127473214" observedRunningTime="2026-04-17 02:41:38.885807301 +0000 UTC m=+61.982925505" watchObservedRunningTime="2026-04-17 02:41:44.245643116 +0000 UTC m=+67.342761300" Apr 17 02:41:44.309486 kubelet[2774]: I0417 02:41:44.308878 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-rq6rz" podStartSLOduration=37.925423768 podStartE2EDuration="49.30874121s" podCreationTimestamp="2026-04-17 02:40:55 +0000 UTC" firstStartedPulling="2026-04-17 02:41:31.537146202 +0000 UTC m=+54.634264377" lastFinishedPulling="2026-04-17 02:41:42.920463643 +0000 UTC m=+66.017581819" observedRunningTime="2026-04-17 02:41:44.246126073 +0000 UTC m=+67.343244266" watchObservedRunningTime="2026-04-17 02:41:44.30874121 +0000 UTC m=+67.405859393" Apr 17 02:41:45.451262 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:55788.service - OpenSSH per-connection server daemon (10.0.0.1:55788). Apr 17 02:41:45.649791 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 55788 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:45.685354 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:45.855711 systemd-logind[1598]: New session 11 of user core. 
Apr 17 02:41:45.863775 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 02:41:46.352753 kubelet[2774]: I0417 02:41:46.352436 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f74589f89-p697l" podStartSLOduration=43.344773303 podStartE2EDuration="52.352236583s" podCreationTimestamp="2026-04-17 02:40:54 +0000 UTC" firstStartedPulling="2026-04-17 02:41:34.560870354 +0000 UTC m=+57.657988543" lastFinishedPulling="2026-04-17 02:41:43.568333648 +0000 UTC m=+66.665451823" observedRunningTime="2026-04-17 02:41:44.313168669 +0000 UTC m=+67.410286852" watchObservedRunningTime="2026-04-17 02:41:46.352236583 +0000 UTC m=+69.449354774" Apr 17 02:41:46.533475 sshd[5488]: Connection closed by 10.0.0.1 port 55788 Apr 17 02:41:46.545998 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:46.634505 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:55788.service: Deactivated successfully. Apr 17 02:41:46.637039 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 02:41:46.647256 systemd-logind[1598]: Session 11 logged out. Waiting for processes to exit. Apr 17 02:41:46.656533 systemd-logind[1598]: Removed session 11. 
Apr 17 02:41:47.013775 containerd[1607]: time="2026-04-17T02:41:47.013390045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:47.019841 containerd[1607]: time="2026-04-17T02:41:47.016090383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 02:41:47.024309 containerd[1607]: time="2026-04-17T02:41:47.024218894Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:47.035689 containerd[1607]: time="2026-04-17T02:41:47.035512271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:47.039043 containerd[1607]: time="2026-04-17T02:41:47.038892803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.465131991s" Apr 17 02:41:47.039043 containerd[1607]: time="2026-04-17T02:41:47.038969288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 02:41:47.041979 containerd[1607]: time="2026-04-17T02:41:47.041952988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 02:41:47.094409 containerd[1607]: time="2026-04-17T02:41:47.094256213Z" level=info msg="CreateContainer within sandbox \"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 02:41:47.144965 containerd[1607]: time="2026-04-17T02:41:47.139327580Z" level=info msg="Container 967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:47.196380 containerd[1607]: time="2026-04-17T02:41:47.196284724Z" level=info msg="CreateContainer within sandbox \"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92\"" Apr 17 02:41:47.215292 containerd[1607]: time="2026-04-17T02:41:47.215208008Z" level=info msg="StartContainer for \"967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92\"" Apr 17 02:41:47.233158 containerd[1607]: time="2026-04-17T02:41:47.233106393Z" level=info msg="connecting to shim 967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92" address="unix:///run/containerd/s/5c5c1f483d5fb354e0b744454d49889ef8b42c3b229b594dc8b1810136f8ffc1" protocol=ttrpc version=3 Apr 17 02:41:47.286632 systemd[1]: Started cri-containerd-967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92.scope - libcontainer container 967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92. Apr 17 02:41:47.838778 containerd[1607]: time="2026-04-17T02:41:47.838435270Z" level=info msg="StartContainer for \"967a96ea2cb7bcb9cc06c821734ff54e8e03b8809f71d21ba70d35a5ee8cce92\" returns successfully" Apr 17 02:41:49.046056 kubelet[2774]: E0417 02:41:49.046017 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:51.547477 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:34136.service - OpenSSH per-connection server daemon (10.0.0.1:34136). 
Apr 17 02:41:51.854590 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 34136 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:51.858043 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:51.913074 systemd-logind[1598]: New session 12 of user core. Apr 17 02:41:51.926309 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 02:41:52.672888 sshd[5552]: Connection closed by 10.0.0.1 port 34136 Apr 17 02:41:52.674267 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:52.764608 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:34136.service: Deactivated successfully. Apr 17 02:41:52.788319 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 02:41:52.805970 systemd-logind[1598]: Session 12 logged out. Waiting for processes to exit. Apr 17 02:41:52.814090 systemd-logind[1598]: Removed session 12. Apr 17 02:41:54.422122 containerd[1607]: time="2026-04-17T02:41:54.421826353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:54.427181 containerd[1607]: time="2026-04-17T02:41:54.421970807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 02:41:54.429584 containerd[1607]: time="2026-04-17T02:41:54.429446482Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:54.433516 containerd[1607]: time="2026-04-17T02:41:54.433454713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:54.434254 containerd[1607]: 
time="2026-04-17T02:41:54.434194984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 7.392215489s" Apr 17 02:41:54.434254 containerd[1607]: time="2026-04-17T02:41:54.434236428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 02:41:54.438103 containerd[1607]: time="2026-04-17T02:41:54.438068863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 02:41:54.469610 containerd[1607]: time="2026-04-17T02:41:54.469548736Z" level=info msg="CreateContainer within sandbox \"45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 02:41:54.502488 containerd[1607]: time="2026-04-17T02:41:54.501070911Z" level=info msg="Container bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:54.536041 containerd[1607]: time="2026-04-17T02:41:54.535985800Z" level=info msg="CreateContainer within sandbox \"45686ce51eada7186e439161331614e6e840d33cd43769e36085ae9a06f91b16\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae\"" Apr 17 02:41:54.537091 containerd[1607]: time="2026-04-17T02:41:54.537062529Z" level=info msg="StartContainer for \"bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae\"" Apr 17 02:41:54.545994 containerd[1607]: time="2026-04-17T02:41:54.545160936Z" level=info msg="connecting to shim 
bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae" address="unix:///run/containerd/s/e4de40affc52f59cdc0d534a64825d9b5991225e2d58d08ea6451655bda15928" protocol=ttrpc version=3 Apr 17 02:41:54.609023 systemd[1]: Started cri-containerd-bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae.scope - libcontainer container bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae. Apr 17 02:41:54.687905 containerd[1607]: time="2026-04-17T02:41:54.685691686Z" level=info msg="StartContainer for \"bdf5ad7bea1cc4f22f11cb8742d1b1d0668c3feb1d8f7388aa87797ea69d82ae\" returns successfully" Apr 17 02:41:55.023390 kubelet[2774]: I0417 02:41:55.022309 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78755f586c-62jds" podStartSLOduration=40.427080864 podStartE2EDuration="59.022288088s" podCreationTimestamp="2026-04-17 02:40:56 +0000 UTC" firstStartedPulling="2026-04-17 02:41:35.841145076 +0000 UTC m=+58.938263267" lastFinishedPulling="2026-04-17 02:41:54.436352313 +0000 UTC m=+77.533470491" observedRunningTime="2026-04-17 02:41:55.02155417 +0000 UTC m=+78.118672364" watchObservedRunningTime="2026-04-17 02:41:55.022288088 +0000 UTC m=+78.119406276" Apr 17 02:41:57.044711 kubelet[2774]: E0417 02:41:57.044639 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 02:41:57.742250 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:34144.service - OpenSSH per-connection server daemon (10.0.0.1:34144). Apr 17 02:41:57.884139 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 34144 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:57.885826 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:57.917488 systemd-logind[1598]: New session 13 of user core. 
Apr 17 02:41:57.929699 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 02:41:58.140583 containerd[1607]: time="2026-04-17T02:41:58.140287618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:58.141203 containerd[1607]: time="2026-04-17T02:41:58.140896750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 02:41:58.142072 containerd[1607]: time="2026-04-17T02:41:58.141996414Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:58.144450 containerd[1607]: time="2026-04-17T02:41:58.144413301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 02:41:58.148961 containerd[1607]: time="2026-04-17T02:41:58.147150905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.70900473s" Apr 17 02:41:58.148961 containerd[1607]: time="2026-04-17T02:41:58.147327384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 02:41:58.198316 containerd[1607]: time="2026-04-17T02:41:58.198209475Z" level=info msg="CreateContainer within sandbox 
\"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 02:41:58.295532 containerd[1607]: time="2026-04-17T02:41:58.295311123Z" level=info msg="Container 5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf: CDI devices from CRI Config.CDIDevices: []" Apr 17 02:41:58.324353 containerd[1607]: time="2026-04-17T02:41:58.324307665Z" level=info msg="CreateContainer within sandbox \"be61a88ebdf969e96665eed8323fd4000bc1715680cad7b14dfa33c85fa4d899\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf\"" Apr 17 02:41:58.327811 containerd[1607]: time="2026-04-17T02:41:58.327559977Z" level=info msg="StartContainer for \"5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf\"" Apr 17 02:41:58.343244 containerd[1607]: time="2026-04-17T02:41:58.343152512Z" level=info msg="connecting to shim 5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf" address="unix:///run/containerd/s/5c5c1f483d5fb354e0b744454d49889ef8b42c3b229b594dc8b1810136f8ffc1" protocol=ttrpc version=3 Apr 17 02:41:58.424643 systemd[1]: Started cri-containerd-5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf.scope - libcontainer container 5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf. 
Apr 17 02:41:58.594004 kubelet[2774]: E0417 02:41:58.592983 2774 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7195f3e9_744b_4ed0_a1cb_10c30647c614.slice/cri-containerd-5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf.scope\": RecentStats: unable to find data in memory cache]" Apr 17 02:41:58.644782 containerd[1607]: time="2026-04-17T02:41:58.644655728Z" level=info msg="StartContainer for \"5cd61c20897f0f2978d8fa11b7fc1abfb09e6ea254840c37c5417b89b86d85bf\" returns successfully" Apr 17 02:41:58.664997 sshd[5698]: Connection closed by 10.0.0.1 port 34144 Apr 17 02:41:58.664506 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:58.708963 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:34154.service - OpenSSH per-connection server daemon (10.0.0.1:34154). Apr 17 02:41:58.710070 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:34144.service: Deactivated successfully. Apr 17 02:41:58.716079 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 02:41:58.724476 systemd-logind[1598]: Session 13 logged out. Waiting for processes to exit. Apr 17 02:41:58.725785 systemd-logind[1598]: Removed session 13. Apr 17 02:41:58.768541 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 34154 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:58.770282 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:58.792259 systemd-logind[1598]: New session 14 of user core. Apr 17 02:41:58.800173 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 02:41:59.055345 sshd[5748]: Connection closed by 10.0.0.1 port 34154 Apr 17 02:41:59.063552 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:59.086757 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:34154.service: Deactivated successfully. 
Apr 17 02:41:59.093429 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 02:41:59.094478 systemd-logind[1598]: Session 14 logged out. Waiting for processes to exit. Apr 17 02:41:59.097237 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:34172.service - OpenSSH per-connection server daemon (10.0.0.1:34172). Apr 17 02:41:59.105465 systemd-logind[1598]: Removed session 14. Apr 17 02:41:59.129235 kubelet[2774]: I0417 02:41:59.128764 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5kt7f" podStartSLOduration=40.072049495 podStartE2EDuration="1m3.128721799s" podCreationTimestamp="2026-04-17 02:40:56 +0000 UTC" firstStartedPulling="2026-04-17 02:41:35.099508131 +0000 UTC m=+58.196626305" lastFinishedPulling="2026-04-17 02:41:58.156180419 +0000 UTC m=+81.253298609" observedRunningTime="2026-04-17 02:41:59.128191716 +0000 UTC m=+82.225309899" watchObservedRunningTime="2026-04-17 02:41:59.128721799 +0000 UTC m=+82.225839975" Apr 17 02:41:59.214071 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 34172 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:41:59.216283 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:41:59.223461 systemd-logind[1598]: New session 15 of user core. Apr 17 02:41:59.233095 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 17 02:41:59.265532 kubelet[2774]: I0417 02:41:59.265466 2774 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 02:41:59.266610 kubelet[2774]: I0417 02:41:59.266567 2774 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 02:41:59.367324 sshd[5763]: Connection closed by 10.0.0.1 port 34172 Apr 17 02:41:59.367698 sshd-session[5760]: pam_unix(sshd:session): session closed for user core Apr 17 02:41:59.371333 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:34172.service: Deactivated successfully. Apr 17 02:41:59.373146 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 02:41:59.389157 systemd-logind[1598]: Session 15 logged out. Waiting for processes to exit. Apr 17 02:41:59.390411 systemd-logind[1598]: Removed session 15. Apr 17 02:42:04.392678 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:46766.service - OpenSSH per-connection server daemon (10.0.0.1:46766). Apr 17 02:42:04.529982 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 46766 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:42:04.531711 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:04.538561 systemd-logind[1598]: New session 16 of user core. Apr 17 02:42:04.548034 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 02:42:04.729625 sshd[5790]: Connection closed by 10.0.0.1 port 46766 Apr 17 02:42:04.730148 sshd-session[5787]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:04.741707 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:46766.service: Deactivated successfully. Apr 17 02:42:04.747602 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 02:42:04.748657 systemd-logind[1598]: Session 16 logged out. Waiting for processes to exit. 
Apr 17 02:42:04.751000 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:46772.service - OpenSSH per-connection server daemon (10.0.0.1:46772). Apr 17 02:42:04.754590 systemd-logind[1598]: Removed session 16. Apr 17 02:42:04.829537 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 46772 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:42:04.831514 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:04.841705 systemd-logind[1598]: New session 17 of user core. Apr 17 02:42:04.847096 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 02:42:05.143988 sshd[5806]: Connection closed by 10.0.0.1 port 46772 Apr 17 02:42:05.145284 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Apr 17 02:42:05.160834 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:46772.service: Deactivated successfully. Apr 17 02:42:05.170716 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 02:42:05.174424 systemd-logind[1598]: Session 17 logged out. Waiting for processes to exit. Apr 17 02:42:05.184712 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:46782.service - OpenSSH per-connection server daemon (10.0.0.1:46782). Apr 17 02:42:05.186425 systemd-logind[1598]: Removed session 17. Apr 17 02:42:05.297363 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 46782 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc Apr 17 02:42:05.299050 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 02:42:05.313450 systemd-logind[1598]: New session 18 of user core. Apr 17 02:42:05.322238 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 17 02:42:06.043559 kubelet[2774]: E0417 02:42:06.043521 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:42:06.265967 sshd[5821]: Connection closed by 10.0.0.1 port 46782
Apr 17 02:42:06.266544 sshd-session[5818]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:06.290544 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794).
Apr 17 02:42:06.291036 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:46782.service: Deactivated successfully.
Apr 17 02:42:06.293589 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 02:42:06.299636 systemd-logind[1598]: Session 18 logged out. Waiting for processes to exit.
Apr 17 02:42:06.322576 systemd-logind[1598]: Removed session 18.
Apr 17 02:42:06.466657 sshd[5845]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc
Apr 17 02:42:06.475674 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:42:06.508824 systemd-logind[1598]: New session 19 of user core.
Apr 17 02:42:06.522777 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 02:42:07.190247 sshd[5852]: Connection closed by 10.0.0.1 port 46794
Apr 17 02:42:07.197946 sshd-session[5845]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:07.221280 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:46798.service - OpenSSH per-connection server daemon (10.0.0.1:46798).
Apr 17 02:42:07.223250 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:46794.service: Deactivated successfully.
Apr 17 02:42:07.230522 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 02:42:07.235064 systemd-logind[1598]: Session 19 logged out. Waiting for processes to exit.
Apr 17 02:42:07.239795 systemd-logind[1598]: Removed session 19.
Apr 17 02:42:07.445819 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 46798 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc
Apr 17 02:42:07.449510 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:42:07.467689 systemd-logind[1598]: New session 20 of user core.
Apr 17 02:42:07.477099 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 02:42:07.721394 sshd[5867]: Connection closed by 10.0.0.1 port 46798
Apr 17 02:42:07.723258 sshd-session[5861]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:07.736756 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:46798.service: Deactivated successfully.
Apr 17 02:42:07.744494 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 02:42:07.745692 systemd-logind[1598]: Session 20 logged out. Waiting for processes to exit.
Apr 17 02:42:07.747003 systemd-logind[1598]: Removed session 20.
Apr 17 02:42:12.748275 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500).
Apr 17 02:42:12.837496 sshd[5882]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc
Apr 17 02:42:12.839291 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:42:12.845651 systemd-logind[1598]: New session 21 of user core.
Apr 17 02:42:12.850070 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 02:42:13.138029 sshd[5885]: Connection closed by 10.0.0.1 port 33500
Apr 17 02:42:13.138563 sshd-session[5882]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:13.157706 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:33500.service: Deactivated successfully.
Apr 17 02:42:13.188583 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 02:42:13.195050 systemd-logind[1598]: Session 21 logged out. Waiting for processes to exit.
Apr 17 02:42:13.211638 systemd-logind[1598]: Removed session 21.
Apr 17 02:42:18.309894 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:33532.service - OpenSSH per-connection server daemon (10.0.0.1:33532).
Apr 17 02:42:18.666150 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 33532 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc
Apr 17 02:42:18.686288 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:42:18.729399 kubelet[2774]: I0417 02:42:18.728561 2774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 02:42:18.898504 systemd-logind[1598]: New session 22 of user core.
Apr 17 02:42:18.917345 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 02:42:19.056416 kubelet[2774]: E0417 02:42:19.054365 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:42:19.829209 sshd[5961]: Connection closed by 10.0.0.1 port 33532
Apr 17 02:42:19.831285 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:19.842517 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:33532.service: Deactivated successfully.
Apr 17 02:42:19.853655 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 02:42:19.855861 systemd-logind[1598]: Session 22 logged out. Waiting for processes to exit.
Apr 17 02:42:19.857114 systemd-logind[1598]: Removed session 22.
Apr 17 02:42:24.049685 kubelet[2774]: E0417 02:42:24.049593 2774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 02:42:24.850105 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:60916.service - OpenSSH per-connection server daemon (10.0.0.1:60916).
Apr 17 02:42:24.998735 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 60916 ssh2: RSA SHA256:NqGHB9rSHtMUQkJqVjWjGfsIhZEnp/FTWIhF91T30gc
Apr 17 02:42:25.016373 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 02:42:25.037771 systemd-logind[1598]: New session 23 of user core.
Apr 17 02:42:25.045125 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 17 02:42:25.290937 sshd[5979]: Connection closed by 10.0.0.1 port 60916
Apr 17 02:42:25.292246 sshd-session[5976]: pam_unix(sshd:session): session closed for user core
Apr 17 02:42:25.299700 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:60916.service: Deactivated successfully.
Apr 17 02:42:25.315117 systemd[1]: session-23.scope: Deactivated successfully.
Apr 17 02:42:25.317682 systemd-logind[1598]: Session 23 logged out. Waiting for processes to exit.
Apr 17 02:42:25.319066 systemd-logind[1598]: Removed session 23.